Oct 02 11:27:58 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 02 11:27:58 crc restorecon[4677]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:58 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 
11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 11:27:59 crc 
restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 
11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 
11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc 
restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 11:27:59 crc restorecon[4677]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
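A note on the long restorecon run above: container_file_t is a customizable SELinux type, so restorecon leaves the runtime-assigned labels (including the MCS category pairs such as c7,c13) in place and reports each path as "not reset as customized by admin"; only genuinely mislabeled entries, such as /var/lib/kubelet/config.json and /var/usrlocal/bin/kubenswrapper at the start and end of the pass, are actually relabeled. A minimal sketch for spot-checking a label by hand, assuming root shell access on the node; the paths are examples lifted from the log:

ls -Z /var/lib/kubelet/config.json          # current SELinux context of the file
matchpathcon /var/lib/kubelet/config.json   # context the loaded policy expects
restorecon -Rnv /var/lib/kubelet            # dry run: -R recursive, -n report only, -v verbose

Adding -F to restorecon would force customizable types to be reset as well, which is exactly what this boot-time pass avoids.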
Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 02 11:28:00 crc kubenswrapper[4725]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.870536 4725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874292 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874320 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874326 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874332 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874340 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874346 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874351 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874359 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
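Each of the deprecated --flag warnings above points at the same remedy: carry the setting in the KubeletConfiguration file named by --config instead of on the command line. A minimal sketch of that migration, written as a shell heredoc; the target path and all values are placeholders, only the kubelet.config.k8s.io/v1beta1 field names are real:

cat <<'EOF' > /tmp/kubelet-config.example.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
registerWithTaints:                                           # replaces --register-with-taints
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:                                               # replaces --system-reserved
  cpu: 500m
  memory: 1Gi
featureGates:                                                 # explicit gates, cf. the feature_gate.go lines
  DisableKubeletCloudCredentialProviders: true
EOF

(--minimum-container-ttl-duration has no config-file equivalent; its warning above says to use the eviction settings instead.) The W...feature_gate.go:330 "unrecognized feature gate" entries interleaved here and below appear to be OpenShift-level gates that this kubelet build does not know about; as the log shows, it emits them as warnings and continues starting up rather than failing.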
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874370 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874376 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874381 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874391 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874396 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874401 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874405 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874410 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874415 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874420 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874424 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874429 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874433 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874438 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874442 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874451 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874455 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874460 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874465 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874470 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874474 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874479 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874483 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874488 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874492 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874499 4725 feature_gate.go:330] unrecognized 
feature gate: HardwareSpeed Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874504 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874511 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874521 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874526 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874534 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874540 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874546 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874551 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874555 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874560 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874570 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874608 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874613 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874623 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874627 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874632 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874636 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874641 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874645 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874650 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874656 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874660 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874665 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874669 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874675 4725 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874679 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874684 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874687 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874694 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874714 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874718 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874738 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874743 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874747 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874751 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874755 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.874760 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881478 4725 flags.go:64] FLAG: --address="0.0.0.0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881526 4725 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881542 4725 flags.go:64] FLAG: --anonymous-auth="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881551 4725 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881560 4725 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881566 4725 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881574 4725 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881581 4725 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881586 4725 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881591 4725 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881597 4725 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881603 4725 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881608 4725 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881613 4725 flags.go:64] FLAG: --cgroup-root="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881618 4725 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881623 
4725 flags.go:64] FLAG: --client-ca-file="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881629 4725 flags.go:64] FLAG: --cloud-config="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881634 4725 flags.go:64] FLAG: --cloud-provider="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881640 4725 flags.go:64] FLAG: --cluster-dns="[]" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881649 4725 flags.go:64] FLAG: --cluster-domain="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881654 4725 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881660 4725 flags.go:64] FLAG: --config-dir="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881666 4725 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881671 4725 flags.go:64] FLAG: --container-log-max-files="5" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881679 4725 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881684 4725 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881690 4725 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881696 4725 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881701 4725 flags.go:64] FLAG: --contention-profiling="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881707 4725 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881712 4725 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881718 4725 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881743 4725 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881752 4725 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881757 4725 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881762 4725 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881767 4725 flags.go:64] FLAG: --enable-load-reader="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881773 4725 flags.go:64] FLAG: --enable-server="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881779 4725 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881788 4725 flags.go:64] FLAG: --event-burst="100" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881793 4725 flags.go:64] FLAG: --event-qps="50" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881798 4725 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881804 4725 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881809 4725 flags.go:64] FLAG: --eviction-hard="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881816 4725 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881821 4725 flags.go:64] FLAG: 
--eviction-minimum-reclaim="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881826 4725 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881831 4725 flags.go:64] FLAG: --eviction-soft="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881836 4725 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881841 4725 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881846 4725 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881851 4725 flags.go:64] FLAG: --experimental-mounter-path="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881856 4725 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881862 4725 flags.go:64] FLAG: --fail-swap-on="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881866 4725 flags.go:64] FLAG: --feature-gates="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881873 4725 flags.go:64] FLAG: --file-check-frequency="20s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881878 4725 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881884 4725 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881889 4725 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881894 4725 flags.go:64] FLAG: --healthz-port="10248" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881899 4725 flags.go:64] FLAG: --help="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881905 4725 flags.go:64] FLAG: --hostname-override="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881910 4725 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881915 4725 flags.go:64] FLAG: --http-check-frequency="20s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881920 4725 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881926 4725 flags.go:64] FLAG: --image-credential-provider-config="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881930 4725 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881936 4725 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881940 4725 flags.go:64] FLAG: --image-service-endpoint="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881945 4725 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881950 4725 flags.go:64] FLAG: --kube-api-burst="100" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881955 4725 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881960 4725 flags.go:64] FLAG: --kube-api-qps="50" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881965 4725 flags.go:64] FLAG: --kube-reserved="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881970 4725 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881975 4725 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881980 4725 flags.go:64] FLAG: --kubelet-cgroups="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881985 4725 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881990 4725 flags.go:64] FLAG: --lock-file="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.881995 4725 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882000 4725 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882005 4725 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882013 4725 flags.go:64] FLAG: --log-json-split-stream="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882022 4725 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882029 4725 flags.go:64] FLAG: --log-text-split-stream="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882035 4725 flags.go:64] FLAG: --logging-format="text" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882041 4725 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882046 4725 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882051 4725 flags.go:64] FLAG: --manifest-url="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882056 4725 flags.go:64] FLAG: --manifest-url-header="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882064 4725 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882069 4725 flags.go:64] FLAG: --max-open-files="1000000" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882075 4725 flags.go:64] FLAG: --max-pods="110" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882081 4725 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882086 4725 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882091 4725 flags.go:64] FLAG: --memory-manager-policy="None" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882096 4725 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882102 4725 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882107 4725 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882113 4725 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882135 4725 flags.go:64] FLAG: --node-status-max-images="50" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882140 4725 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882145 4725 flags.go:64] FLAG: --oom-score-adj="-999" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882151 4725 flags.go:64] FLAG: --pod-cidr="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882156 4725 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882165 4725 flags.go:64] FLAG: --pod-manifest-path="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882170 4725 flags.go:64] FLAG: --pod-max-pids="-1" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882175 4725 flags.go:64] FLAG: --pods-per-core="0" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882180 4725 flags.go:64] FLAG: --port="10250" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882185 4725 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882190 4725 flags.go:64] FLAG: --provider-id="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882195 4725 flags.go:64] FLAG: --qos-reserved="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882202 4725 flags.go:64] FLAG: --read-only-port="10255" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882207 4725 flags.go:64] FLAG: --register-node="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882212 4725 flags.go:64] FLAG: --register-schedulable="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882218 4725 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882242 4725 flags.go:64] FLAG: --registry-burst="10" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882247 4725 flags.go:64] FLAG: --registry-qps="5" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882253 4725 flags.go:64] FLAG: --reserved-cpus="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882260 4725 flags.go:64] FLAG: --reserved-memory="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882268 4725 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882273 4725 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882278 4725 flags.go:64] FLAG: --rotate-certificates="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882283 4725 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882288 4725 flags.go:64] FLAG: --runonce="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882293 4725 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882298 4725 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882303 4725 flags.go:64] FLAG: --seccomp-default="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882308 4725 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882313 4725 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882318 4725 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882324 4725 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882329 4725 flags.go:64] FLAG: --storage-driver-password="root" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882334 4725 flags.go:64] FLAG: --storage-driver-secure="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 
11:28:00.882338 4725 flags.go:64] FLAG: --storage-driver-table="stats" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882344 4725 flags.go:64] FLAG: --storage-driver-user="root" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882349 4725 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882355 4725 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882360 4725 flags.go:64] FLAG: --system-cgroups="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882366 4725 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882375 4725 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882381 4725 flags.go:64] FLAG: --tls-cert-file="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882386 4725 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882393 4725 flags.go:64] FLAG: --tls-min-version="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882398 4725 flags.go:64] FLAG: --tls-private-key-file="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882404 4725 flags.go:64] FLAG: --topology-manager-policy="none" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882409 4725 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882415 4725 flags.go:64] FLAG: --topology-manager-scope="container" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882421 4725 flags.go:64] FLAG: --v="2" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882429 4725 flags.go:64] FLAG: --version="false" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882437 4725 flags.go:64] FLAG: --vmodule="" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882444 4725 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.882450 4725 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882657 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882666 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882675 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
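
Just before this point the kubelet echoed every effective command-line value as a `flags.go:64] FLAG: --name="value"` line. Folding that dump into a dict makes it easy to check a node's actual startup values, or diff them against what the config file is supposed to provide. A sketch under the same assumption (the journal text already read into `text`):

```python
import re

# One entry per startup line of the form:  flags.go:64] FLAG: --max-pods="110"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def effective_flags(journal_text: str) -> dict[str, str]:
    """Map each flag in the kubelet's startup dump to the quoted value it logged."""
    return dict(FLAG_RE.findall(journal_text))

# For the dump above:
#   effective_flags(text)["--node-ip"]       == "192.168.126.11"
#   effective_flags(text)["--cgroup-driver"] == "cgroupfs"
# (note the CRI runtime later overrides the cgroup driver to "systemd")
```
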
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882682 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882687 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882694 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882699 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882704 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882711 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882716 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882720 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882742 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882748 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882752 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882757 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882761 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882766 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882770 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882776 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882780 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882785 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882789 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882794 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882799 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882805 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882811 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882816 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882821 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882826 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882831 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882836 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882841 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882846 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882850 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882855 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882859 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882864 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882869 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882875 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882880 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882884 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882888 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882893 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882897 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882904 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882910 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882916 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882922 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882927 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882932 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882937 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882942 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882947 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882951 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882956 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882960 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882964 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882969 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882973 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882978 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882982 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882987 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882993 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.882997 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883001 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883005 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883010 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883014 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883019 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883023 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.883028 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.883044 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.907686 4725 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.907768 4725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907872 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907883 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907890 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907895 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907902 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907909 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907914 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907919 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907924 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907930 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907935 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907940 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907945 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907950 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907956 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907961 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907966 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907971 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907977 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907982 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907988 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907993 4725 feature_gate.go:330] 
unrecognized feature gate: NewOLM Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.907998 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908004 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908010 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908015 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908020 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908026 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908031 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908040 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908045 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908051 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908057 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908062 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908068 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908073 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908079 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908084 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908089 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908096 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908104 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908111 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908118 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908124 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908130 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908136 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908141 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908149 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908157 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908163 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908169 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908176 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908181 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908188 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908193 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908198 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908203 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908210 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
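
The "unrecognized feature gate" block is not new information each time it appears: kubenswrapper re-parses the same OpenShift gate list at several points during startup (compare the runs at offsets .874*, .882*, and .907*–.908* above), so every unknown name repeats once per parse. A sketch that collapses the repeats, same assumptions as before:

```python
import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def gate_warning_counts(journal_text: str) -> Counter:
    """Count warnings per unknown gate name; the count equals the number of re-parses."""
    return Counter(UNRECOGNIZED.findall(journal_text))

# Every gate should show the same count; a name with a lower count would mean
# it was only present in some of the parsed gate lists.
```
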
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908216 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908222 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908227 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908232 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908237 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908242 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908249 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908257 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908264 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908271 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908277 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908284 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908291 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.908303 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908458 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908467 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908475 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908482 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908489 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908495 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908503 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908509 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908516 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 
11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908523 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908529 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908535 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908540 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908547 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908554 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908560 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908565 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908570 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908575 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908580 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908586 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908593 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908599 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908605 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908611 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908617 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908622 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908627 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908632 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908638 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908644 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908649 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908654 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908659 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908665 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908671 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908676 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908681 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908687 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908693 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908704 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
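
After each warning run the kubelet also prints the gate values it actually applied, as a Go map literal: `feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}`. Parsing that summary answers "is gate X on?" without wading through the warnings. A sketch (re.S tolerates the summary being wrapped across lines in a saved dump):

```python
import re

MAP_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}", re.S)

def parse_feature_gates(journal_text: str) -> dict[str, bool]:
    """Parse the Go-style 'feature gates: {map[Name:bool ...]}' summary into a dict."""
    m = MAP_RE.search(journal_text)
    if not m:
        return {}
    pairs = (item.split(":", 1) for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

# For the summaries above:
#   parse_feature_gates(text)["KMSv1"]                 is True
#   parse_feature_gates(text)["VolumeAttributesClass"] is False
```
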
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908713 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908724 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908752 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908760 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908767 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908775 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908782 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908788 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908793 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908800 4725 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908807 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908814 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908820 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908827 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908833 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908840 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908847 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908853 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908859 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908865 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908872 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908878 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908886 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908893 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908901 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908910 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908919 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908927 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908934 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 02 11:28:00 crc kubenswrapper[4725]: W1002 11:28:00.908941 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.908952 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.909998 4725 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.917036 4725 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.917182 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.921119 4725 server.go:997] "Starting client certificate rotation"
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.921184 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.923406 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-18 04:02:17.327081212 +0000 UTC
Oct 02 11:28:00 crc kubenswrapper[4725]: I1002 11:28:00.923540 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2584h34m16.403545898s for next certificate rotation
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.001560 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.007192 4725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.047694 4725 log.go:25] "Validated CRI v1 runtime API"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.128172 4725 log.go:25] "Validated CRI v1 image API"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.132019 4725 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.146924 4725 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-02-11-23-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.146965 4725 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.168393 4725 manager.go:217] Machine: {Timestamp:2025-10-02 11:28:01.16477649 +0000 UTC m=+1.072275993 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:40cbc71f-67e8-47ed-8b97-d7af0f87b7bd BootID:fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b2:60:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b2:60:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:3d:7d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f8:fb:8e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:aa:54:4d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:76:6a:92 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:e9:f1:9a:ee:2c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:05:3f:73:88:a9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.168836 4725 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.169035 4725 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.171235 4725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.171516 4725 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.171561 4725 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.171833 4725 topology_manager.go:138] "Creating topology manager with none policy"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.171847 4725 container_manager_linux.go:303] "Creating device plugin manager"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.172324 4725 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.172350 4725 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.172625 4725 state_mem.go:36] "Initialized new in-memory state store"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.172744 4725 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.175764 4725 kubelet.go:418] "Attempting to sync node with API server"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.175811 4725 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.175849 4725 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.175868 4725 kubelet.go:324] "Adding apiserver pod source"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.175884 4725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.179441 4725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.181007 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 02 11:28:01 crc kubenswrapper[4725]: W1002 11:28:01.182979 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.183050 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError"
Oct 02 11:28:01 crc kubenswrapper[4725]: W1002 11:28:01.183220 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.183326 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.183472 4725 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191410 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191445 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191452 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191458 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191470 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191477 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191484 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191496 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191505 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191512 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191536 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.191550 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.194237 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.194778 4725 server.go:1280] "Started kubelet"
Oct 02 11:28:01 crc systemd[1]: Started Kubernetes Kubelet.
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.196087 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.196275 4725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.196292 4725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197142 4725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197464 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197516 4725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.197637 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197611 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:36:55.679112676 +0000 UTC
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197684 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1382h8m54.481435497s for next certificate rotation
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.197957 4725 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.198082 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="200ms"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.198111 4725 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.198131 4725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.198889 4725 factory.go:55] Registering systemd factory
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.198919 4725 factory.go:221] Registration of the systemd container factory successfully
Oct 02 11:28:01 crc kubenswrapper[4725]: W1002 11:28:01.203402 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.203552 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.204053 4725 factory.go:153] Registering CRI-O factory
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.204093 4725 factory.go:221] Registration of the crio container factory successfully
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.204238 4725 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.204279 4725 factory.go:103] Registering Raw factory
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.204317 4725 manager.go:1196] Started watching for new ooms in manager
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.205932 4725 server.go:460] "Adding debug handlers to kubelet server"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.212852 4725 manager.go:319] Starting recovery of all containers
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.222676 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186aa90dcdd7f7a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,LastTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226150 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226245 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226268 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226283 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226305 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226319 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226334 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226354 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226379 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226394 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226407 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226419 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226436 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226458 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226470 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226483 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226498 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226510 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226523 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226536 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226549 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226562 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226573 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226584 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226595 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226607 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.226623 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228542 4725 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228583 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228598 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228611 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228624 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228664 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228676 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228687 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228700 4725 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228711 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228740 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228756 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228768 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228781 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228792 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228803 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228814 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228826 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228836 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228846 4725 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228856 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228868 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228879 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228890 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228901 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228911 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228927 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228941 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228953 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228965 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.228979 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229005 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229017 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229032 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229045 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229056 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229068 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229081 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229092 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229104 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229114 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229125 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229136 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229148 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229159 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229169 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229180 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229191 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229204 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229215 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229226 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229237 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229249 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229261 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229272 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229284 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229296 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229307 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229317 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229332 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229343 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229354 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229366 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229377 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229389 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229400 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229410 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229420 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229442 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229452 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229464 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229475 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229486 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229497 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229508 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229519 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229540 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229557 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229568 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229578 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229641 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229657 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229671 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229685 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229706 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229718 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229748 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229759 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229770 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229781 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229797 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229810 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229818 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229827 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229835 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229844 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229854 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229863 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229878 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.229989 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230000 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230010 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230018 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230027 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230036 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230044 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230054 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230063 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230076 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230085 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230094 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230102 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230112 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230121 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230129 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230138 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230148 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230158 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230167 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230177 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230185 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230195 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230205 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230212 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230220 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230228 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230236 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230244 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230252 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230260 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230268 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230276 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230286 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230294 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230303 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230312 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230321 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230329 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230339 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230349 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230358 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230367 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230377 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230384 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230393 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230401 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230410 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230418 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230427 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230435 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230476 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230489 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230500 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230513 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230524 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230535 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230543 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230553 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230561 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230569 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230578 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230587 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230596 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230604 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230614 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230625 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230634 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230643 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230653 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230699 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230709 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230732 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230741 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230749 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230757 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230766 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230774 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230782 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230792 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230800 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230809 4725 reconstruct.go:97] "Volume reconstruction finished" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.230815 4725 reconciler.go:26] "Reconciler: start to sync state" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.235468 4725 manager.go:324] Recovery completed Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.244692 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.246111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.246143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.246151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.248851 4725 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.248869 4725 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.248895 4725 state_mem.go:36] "Initialized new in-memory state store"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.264229 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.266014 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.266733 4725 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.266768 4725 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.266824 4725 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 02 11:28:01 crc kubenswrapper[4725]: W1002 11:28:01.269694 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.269823 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.298490 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.329040 4725 policy_none.go:49] "None policy: Start"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.330825 4725 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.330864 4725 state_mem.go:35] "Initializing new in-memory state store"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.367800 4725 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.382591 4725 manager.go:334] "Starting Device Plugin manager"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.382646 4725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.382660 4725 server.go:79] "Starting device plugin registration server"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.383059 4725 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.383075 4725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.383223 4725 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.383298 4725 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
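Every "connection refused" above points at the same address, api-int.crc.testing:6443 (38.129.56.162). On a single-node CRC cluster the kube-apiserver behind that endpoint is itself a static pod that this kubelet has not started yet, so the client-go reflectors and node-status calls necessarily fail until the control-plane pods seen further down come up; the kubelet simply keeps retrying. A quick check that all refusals share one endpoint, again as a sketch against a hypothetical crc-kubelet.log export:

    import re
    from collections import Counter

    # Group the refused connections by target URL; a single unreachable
    # apiserver should account for all of them.
    target = re.compile(r'(?:Get|Post) \\?"(https://[^"\\]+)')
    refused = Counter()
    with open("crc-kubelet.log") as log:
        for line in log:
            if "connection refused" in line:
                refused.update(target.findall(line))
    for url, n in refused.most_common():
        print(f"{n:4d}  {url}")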
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.392905 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.398906 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="400ms" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.483319 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.484579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.484620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.484631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.484658 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.485128 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.568531 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.568671 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.569709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.569761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.569772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.569891 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570123 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570162 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.570957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571002 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571236 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571265 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.571991 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.572324 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.572356 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.572995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573193 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573526 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.573554 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574374 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.574397 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.575180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.575199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.575208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635278 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635304 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635352 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635443 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635464 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635563 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635684 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635738 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.635754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.685626 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.686942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.686981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.686994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.687019 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.687409 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737280 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737305 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737347 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737368 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737401 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
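MountVolume for a hostPath volume only has to validate an existing host directory, so the "started" and "SetUp succeeded" records land microseconds apart, and because the operationExecutor runs each operation in its own goroutine the two streams interleave out of timestamp order (a "succeeded" stamped .737443 is logged before a "started" stamped .737482 below). Measuring the per-volume latency from a saved log, with the same one-record-per-line simplification as before:

    import re
    from datetime import datetime

    stamp = re.compile(r'[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})')
    uname = re.compile(r'UniqueName: \\?"([^"\\]+)')
    started = {}
    with open("crc-kubelet.log") as log:
        for line in log:
            t, n = stamp.search(line), uname.search(line)
            if not (t and n):
                continue
            when = datetime.strptime(t.group(1), "%H:%M:%S.%f")
            if "MountVolume started" in line:
                started[n.group(1)] = when
            elif "MountVolume.SetUp succeeded" in line and n.group(1) in started:
                # Records logged out of order are simply skipped, since the
                # "started" entry has not been seen yet at that point.
                micros = (when - started[n.group(1)]).total_seconds() * 1e6
                print(f"{micros:8.0f} us  {n.group(1)}")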
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737408 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737464 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737504 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737482 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737596 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737617 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737750 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737816 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737847 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.737877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: E1002 11:28:01.802388 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="800ms" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.893520 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.911886 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.932947 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.953703 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:01 crc kubenswrapper[4725]: I1002 11:28:01.961792 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.077802 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a8948ff698ef796b83fed53584f1aa5c1f7f5bf7b7dd078c2b01e523dd560609 WatchSource:0}: Error finding container a8948ff698ef796b83fed53584f1aa5c1f7f5bf7b7dd078c2b01e523dd560609: Status 404 returned error can't find the container with id a8948ff698ef796b83fed53584f1aa5c1f7f5bf7b7dd078c2b01e523dd560609 Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.078485 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9c75cbd8026a78f52c9b3dd3175b447116fc23644cb70a317e01c0e083b181a5 WatchSource:0}: Error finding container 9c75cbd8026a78f52c9b3dd3175b447116fc23644cb70a317e01c0e083b181a5: Status 404 returned error can't find the container with id 9c75cbd8026a78f52c9b3dd3175b447116fc23644cb70a317e01c0e083b181a5 Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.081522 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c454617bc10bd6513be74ac17d60bf6cb4e55cedc9812a57569fbd0b718ce952 WatchSource:0}: Error finding container c454617bc10bd6513be74ac17d60bf6cb4e55cedc9812a57569fbd0b718ce952: Status 404 returned error can't find the container with id c454617bc10bd6513be74ac17d60bf6cb4e55cedc9812a57569fbd0b718ce952 Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.082865 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b740003e676337d6e3de092091252522a3b889ba22efee246761786938aa9654 WatchSource:0}: Error finding container b740003e676337d6e3de092091252522a3b889ba22efee246761786938aa9654: Status 404 returned error can't find the container with id b740003e676337d6e3de092091252522a3b889ba22efee246761786938aa9654 Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.085619 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3a71835402a3b0d9874b5f82a507425ff47e621f3e7e7275f80afb16d15bfe6f WatchSource:0}: Error finding container 3a71835402a3b0d9874b5f82a507425ff47e621f3e7e7275f80afb16d15bfe6f: Status 404 returned error can't find the container with id 3a71835402a3b0d9874b5f82a507425ff47e621f3e7e7275f80afb16d15bfe6f Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.087562 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.088461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.088492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.088503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.088526 4725 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.089318 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc" Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.170769 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.170864 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.197344 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.256889 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186aa90dcdd7f7a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,LastTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.269635 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.269717 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.272984 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a71835402a3b0d9874b5f82a507425ff47e621f3e7e7275f80afb16d15bfe6f"} Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.274411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c454617bc10bd6513be74ac17d60bf6cb4e55cedc9812a57569fbd0b718ce952"} Oct 02 11:28:02 
crc kubenswrapper[4725]: I1002 11:28:02.275532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b740003e676337d6e3de092091252522a3b889ba22efee246761786938aa9654"} Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.276610 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9c75cbd8026a78f52c9b3dd3175b447116fc23644cb70a317e01c0e083b181a5"} Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.277713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a8948ff698ef796b83fed53584f1aa5c1f7f5bf7b7dd078c2b01e523dd560609"} Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.285701 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.285801 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:02 crc kubenswrapper[4725]: W1002 11:28:02.375615 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.375695 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.603376 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="1.6s" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.890436 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.891956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.892017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.892035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:02 crc kubenswrapper[4725]: I1002 11:28:02.892078 4725 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Oct 02 11:28:02 crc kubenswrapper[4725]: E1002 11:28:02.892838 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc" Oct 02 11:28:03 crc kubenswrapper[4725]: I1002 11:28:03.197502 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.197248 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:04 crc kubenswrapper[4725]: E1002 11:28:04.204194 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="3.2s" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.284052 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8bbb5647f9664d775bfc1967a8cf84f9540542ba750c94b469f8da895f2db3fc" exitCode=0 Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.284167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8bbb5647f9664d775bfc1967a8cf84f9540542ba750c94b469f8da895f2db3fc"} Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.284173 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.285429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.285475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.285486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.285813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e"} Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.288483 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2" exitCode=0 Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.288562 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2"} Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.288595 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc 
kubenswrapper[4725]: I1002 11:28:04.289635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.289692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.289705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.290513 4725 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc" exitCode=0 Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.290587 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.290585 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc"} Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.291067 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.291482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.291548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.291560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.292227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.292265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.292275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.292969 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7" exitCode=0 Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.293017 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7"} Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.293071 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.296858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.296912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc 
kubenswrapper[4725]: I1002 11:28:04.296924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.493903 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.495914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.495988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.495998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:04 crc kubenswrapper[4725]: I1002 11:28:04.496036 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 11:28:04 crc kubenswrapper[4725]: E1002 11:28:04.496916 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc" Oct 02 11:28:04 crc kubenswrapper[4725]: W1002 11:28:04.889591 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:04 crc kubenswrapper[4725]: E1002 11:28:04.889694 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:04 crc kubenswrapper[4725]: W1002 11:28:04.908130 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:04 crc kubenswrapper[4725]: E1002 11:28:04.908214 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:05 crc kubenswrapper[4725]: I1002 11:28:05.196911 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:05 crc kubenswrapper[4725]: W1002 11:28:05.261931 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused
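
The reflector.go warnings here come from client-go informers: each informer's reflector issues an initial LIST (the GETs with resourceVersion=0 above) followed by a WATCH, and retries with backoff while api-int.crc.testing:6443 refuses connections. A minimal client-go sketch of that list-then-watch setup follows, assuming an illustrative kubeconfig path.

// Minimal client-go informer setup of the kind that produces the reflector
// entries above. The kubeconfig path is an assumption for illustration.
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	_ = factory.Storage().V1().CSIDrivers().Informer() // *v1.CSIDriver list+watch
	_ = factory.Core().V1().Services().Informer()      // *v1.Service list+watch

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)            // spawns one reflector per informer
	factory.WaitForCacheSync(stop) // blocks until the initial LISTs succeed
}

Oct 02 11:28:05 crc kubenswrapper[4725]: E1002 11:28:05.261999 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get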
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:05 crc kubenswrapper[4725]: I1002 11:28:05.296823 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8"} Oct 02 11:28:05 crc kubenswrapper[4725]: W1002 11:28:05.306608 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:05 crc kubenswrapper[4725]: E1002 11:28:05.306676 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.197675 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.302106 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="11e8509750bf549d1d8910bc438103a00ba8a698a9fb44b3b42b90f165ca462c" exitCode=0 Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.302198 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"11e8509750bf549d1d8910bc438103a00ba8a698a9fb44b3b42b90f165ca462c"} Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.305715 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9"} Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.309300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef"} Oct 02 11:28:06 crc kubenswrapper[4725]: I1002 11:28:06.312167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13"} Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.196940 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:07 crc kubenswrapper[4725]: E1002 11:28:07.405909 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.162:6443: connect: connection refused" interval="6.4s" Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.697657 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.698961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.699031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.699052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:07 crc kubenswrapper[4725]: I1002 11:28:07.699107 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 11:28:07 crc kubenswrapper[4725]: E1002 11:28:07.699708 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.162:6443: connect: connection refused" node="crc" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.197526 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.376990 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9"} Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.379438 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5"} Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.379492 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.379518 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:08 crc kubenswrapper[4725]: I1002 11:28:08.382659 4725 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:09 crc kubenswrapper[4725]: W1002 11:28:09.148920 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:09 crc kubenswrapper[4725]: E1002 11:28:09.149022 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:09 crc kubenswrapper[4725]: I1002 11:28:09.197577 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:09 crc kubenswrapper[4725]: W1002 11:28:09.247397 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:09 crc kubenswrapper[4725]: E1002 11:28:09.247515 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:09 crc kubenswrapper[4725]: I1002 11:28:09.385646 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4"} Oct 02 11:28:09 crc kubenswrapper[4725]: W1002 11:28:09.767832 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:09 crc kubenswrapper[4725]: E1002 11:28:09.767942 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: connection refused" logger="UnhandledError" Oct 02 11:28:10 crc kubenswrapper[4725]: W1002 11:28:10.173213 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:10 crc kubenswrapper[4725]: E1002 11:28:10.173326 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.162:6443: connect: 
connection refused" logger="UnhandledError" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.197182 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.392716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5"} Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.392824 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.393879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.393909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.393926 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.396006 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="876eafd82a9a3b14d78c648e3e452f040ec573222da30febda664f0c21562e59" exitCode=0 Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.396068 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"876eafd82a9a3b14d78c648e3e452f040ec573222da30febda664f0c21562e59"} Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.396227 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.397710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.397778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.397790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.400636 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3"} Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.400689 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.402357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.402380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.402391 4725 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.403963 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a"} Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.403986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0"} Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.468114 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:10 crc kubenswrapper[4725]: I1002 11:28:10.673093 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.198183 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:11 crc kubenswrapper[4725]: E1002 11:28:11.393087 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.419187 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d500114443eeeda4e9514272e76925e7cd8a713efff97ebb1c5324fb8e41777"} Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.419259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cce36863b42c127f39453fe0cce051afb34926b6f0393923fed412d0b3a2e884"} Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.424475 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a"} Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.424611 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.424641 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.424684 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426225 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.426281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.427954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.427996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:11 crc kubenswrapper[4725]: I1002 11:28:11.428011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.197060 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.162:6443: connect: connection refused Oct 02 11:28:12 crc kubenswrapper[4725]: E1002 11:28:12.258659 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186aa90dcdd7f7a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,LastTimestamp:2025-10-02 11:28:01.194751908 +0000 UTC m=+1.102251371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.432002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d19c2204cce60d14c8f1cdbc05bd20826b93b9e1f0749d2d4e4e8a7fca3fc7e"} Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.432062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05e5befac7e7db5a04d2f7e4430a47239bef9314ede9a154a04d880253f8a73e"} Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.432089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"979eca9dc593f436ee1e3806edb728484fa0df485d402514b107092382cd2fbf"} Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.432128 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.433232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.433293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:12 crc 
kubenswrapper[4725]: I1002 11:28:12.433313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.433776 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.435483 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a" exitCode=255 Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.435582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a"} Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.435638 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.435651 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.435708 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.437618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.439370 4725 scope.go:117] "RemoveContainer" containerID="f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a" Oct 02 11:28:12 crc kubenswrapper[4725]: I1002 11:28:12.966648 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.308798 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
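
The entries above record a container restart: kube-apiserver-check-endpoints exits with code 255 (ContainerDied), the kubelet logs "RemoveContainer" for the dead container ID, and a replacement appears below as a new ContainerStarted ID. A toy sketch of the restart decision follows; the policy handling is illustrative, not kubelet code.

// Toy sketch of the restart decision suggested by the ContainerDied
// (exitCode=255) -> "RemoveContainer" -> ContainerStarted sequence.
// Types and policy strings are illustrative only.
package main

import "fmt"

type status struct {
	name     string
	exitCode int
}

func shouldRestart(policy string, s status) bool {
	switch policy {
	case "Always":
		return true // static control-plane pods restart regardless of exit code
	case "OnFailure":
		return s.exitCode != 0
	default: // "Never"
		return false
	}
}

func main() {
	s := status{name: "kube-apiserver-check-endpoints", exitCode: 255}
	if shouldRestart("Always", s) {
		fmt.Printf("RemoveContainer %s (exit %d), starting replacement\n", s.name, s.exitCode)
	}
}

Oct 02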
11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.439199 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.440635 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a"} Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.440663 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.440674 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.441533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.441563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.441574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.442413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.442453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:13 crc kubenswrapper[4725]: I1002 11:28:13.442464 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.100017 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.101102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.101143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.101157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.101181 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.443402 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.443520 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.444794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.444908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.445004 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:14 crc kubenswrapper[4725]: I1002 11:28:14.453192 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.058385 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.059197 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.060953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.061161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.061303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.445863 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.446750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.446775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:15 crc kubenswrapper[4725]: I1002 11:28:15.446783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:16 crc kubenswrapper[4725]: I1002 11:28:16.448807 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:16 crc kubenswrapper[4725]: I1002 11:28:16.450014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:16 crc kubenswrapper[4725]: I1002 11:28:16.450071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:16 crc kubenswrapper[4725]: I1002 11:28:16.450080 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.239054 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.239353 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.240941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.241028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.241050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.399177 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.399395 
4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.400892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.400998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.401027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.407467 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.453807 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.454999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.455039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.455051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.468407 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:18 crc kubenswrapper[4725]: I1002 11:28:18.557955 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:19 crc kubenswrapper[4725]: I1002 11:28:19.456809 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:19 crc kubenswrapper[4725]: I1002 11:28:19.458518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:19 crc kubenswrapper[4725]: I1002 11:28:19.458676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:19 crc kubenswrapper[4725]: I1002 11:28:19.458821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:19 crc kubenswrapper[4725]: I1002 11:28:19.932276 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:20 crc kubenswrapper[4725]: I1002 11:28:20.459473 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:20 crc kubenswrapper[4725]: I1002 11:28:20.461531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:20 crc kubenswrapper[4725]: I1002 11:28:20.461597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:20 crc kubenswrapper[4725]: I1002 11:28:20.461622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:21 crc kubenswrapper[4725]: E1002 11:28:21.393240 4725 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.461848 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.462991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.463075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.463103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.557970 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 02 11:28:21 crc kubenswrapper[4725]: I1002 11:28:21.558084 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 02 11:28:23 crc kubenswrapper[4725]: I1002 11:28:23.198390 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 02 11:28:23 crc kubenswrapper[4725]: E1002 11:28:23.806857 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Oct 02 11:28:24 crc kubenswrapper[4725]: E1002 11:28:24.102448 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.153938 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.153996 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.158675 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.158756 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.458873 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]log ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]etcd ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/generic-apiserver-start-informers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/priority-and-fairness-filter ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-apiextensions-informers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-apiextensions-controllers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/crd-informer-synced ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-system-namespaces-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 02 11:28:24 crc kubenswrapper[4725]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/bootstrap-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/start-kube-aggregator-informers ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 02 11:28:24 crc 
kubenswrapper[4725]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-registration-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-discovery-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]autoregister-completion ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-openapi-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 02 11:28:24 crc kubenswrapper[4725]: livez check failed Oct 02 11:28:24 crc kubenswrapper[4725]: I1002 11:28:24.458948 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.267550 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.267698 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.268631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.268686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.268698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.279354 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.478054 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.478981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.479026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:28 crc kubenswrapper[4725]: I1002 11:28:28.479043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.153098 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.153515 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.154624 4725 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.154780 4725 trace.go:236] Trace[191124862]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Oct-2025 11:28:18.890) (total time: 10264ms): Oct 02 11:28:29 crc kubenswrapper[4725]: Trace[191124862]: ---"Objects 
listed" error: 10264ms (11:28:29.154) Oct 02 11:28:29 crc kubenswrapper[4725]: Trace[191124862]: [10.264666601s] [10.264666601s] END Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.154802 4725 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.158098 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.191063 4725 apiserver.go:52] "Watching apiserver" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.195901 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196160 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196450 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.196503 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196676 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196849 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.196876 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196986 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.196762 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.197011 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.197736 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.198908 4725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.201281 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.202186 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.202253 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.202576 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.202649 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.202581 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.204545 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.204588 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.204677 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.207618 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49178->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.207656 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49178->192.168.126.11:17697: read: connection reset by peer" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.227571 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.236064 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.245556 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.254926 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.254969 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.254993 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255010 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255080 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255097 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255130 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255129 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255286 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255319 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255406 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255450 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255511 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255577 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255683 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255146 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255746 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255762 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255764 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255771 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255781 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.255910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256053 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256065 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256067 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256151 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256184 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.256223 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:28:29.756187871 +0000 UTC m=+29.663687504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256228 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256250 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256282 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256311 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256343 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256370 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256399 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256428 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256455 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256481 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256512 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256527 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256542 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256665 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256692 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256716 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256763 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256791 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256816 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256847 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256881 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256932 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256986 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257017 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257064 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257092 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257118 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257165 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257215 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257231 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257279 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257302 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257318 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256903 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256975 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.256990 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257046 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257134 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257198 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257313 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257343 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257446 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257567 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257737 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.257362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258095 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258175 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258196 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258217 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258238 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258259 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258281 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258326 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258342 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258361 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258377 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258437 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258453 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 11:28:29 crc 
kubenswrapper[4725]: I1002 11:28:29.258472 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258494 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258542 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258561 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258582 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258600 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258615 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258633 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258648 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258679 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258712 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258781 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258806 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258844 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258884 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258904 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258926 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258972 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.258998 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259021 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259074 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259089 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259104 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259122 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259138 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259153 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259190 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259231 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259252 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259276 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259297 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259370 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259386 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259407 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259457 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259480 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259505 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259552 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259595 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259618 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259641 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259664 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259688 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259712 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259758 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259785 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259813 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259837 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259867 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259889 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.259981 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260030 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260053 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260077 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260101 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260126 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260149 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260171 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260218 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260270 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260316 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260340 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260364 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260386 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260407 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260430 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260455 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260503 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260588 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260612 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260636 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260707 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260748 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260773 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260797 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260841 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260864 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260887 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260939 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.260986 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261012 
4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261036 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261061 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261083 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261107 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261130 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261152 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261174 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261257 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 
11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261310 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261338 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261393 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261417 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261443 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261518 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261546 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261674 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261691 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.261705 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262088 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262109 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262125 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262140 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262153 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262168 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262175 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262182 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262223 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.262907 4725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.263591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.263695 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.263759 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.263897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.263999 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264150 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264297 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264555 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264823 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264832 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264876 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.264911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.265505 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.265614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.265670 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.266113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.266345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.266507 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.266707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.266806 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268033 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268275 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268328 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268600 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.268909 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.269054 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.269208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.269763 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.269978 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270091 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270354 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270428 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270483 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270637 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270696 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270735 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270740 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270756 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270760 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270814 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270833 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270848 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270863 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270877 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270890 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270904 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270918 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270935 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270949 4725 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: 
I1002 11:28:29.270964 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270977 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270990 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271008 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271023 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271037 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271050 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271065 4725 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271079 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271095 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.270932 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271106 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271608 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271807 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271086 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.271948 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.272098 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.272265 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.272599 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.272609 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.272707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.273084 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.273107 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.273915 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274428 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274466 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274607 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274374 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274619 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.274810 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275036 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275032 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275148 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275420 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275465 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275474 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275560 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275828 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.275864 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276096 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276099 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276312 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276696 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.276895 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.278028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.278597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.278636 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.278834 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.278955 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279231 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279470 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279543 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.279920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.280098 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.280186 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.281150 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.281223 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:29.781201781 +0000 UTC m=+29.688701444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.281319 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.281397 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:29.781368945 +0000 UTC m=+29.688868498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.282301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.282996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.283125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.283206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.283756 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.284116 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.286520 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.286864 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.289266 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.289599 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.291311 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.292943 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.293637 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.293983 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.294003 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.294014 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.294069 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:29.794052165 +0000 UTC m=+29.701551628 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.294151 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.295222 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.295801 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296551 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296749 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296902 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.296956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.297050 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.297273 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.297298 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.297310 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.297356 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:29.797338116 +0000 UTC m=+29.704837569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.298248 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.298331 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.298377 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.298562 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.303577 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.646423 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.646742 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.646917 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.646957 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647000 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647184 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647323 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647478 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647619 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.647985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.648014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.648455 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.648647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.648800 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.649057 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.649095 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.649316 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.649505 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.649604 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.650162 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.652786 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.653203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.655048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.655257 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656160 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656456 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656586 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656687 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656879 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656906 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656918 4725 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656936 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656968 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.656991 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657003 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657016 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657033 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657042 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657051 4725 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657084 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657102 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657117 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657129 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657141 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657188 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657209 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657219 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657232 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657241 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657249 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657261 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657272 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657280 4725 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657289 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657297 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657311 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.657320 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658460 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658628 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.658761 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.659526 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.659574 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660009 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660041 4725 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660053 4725 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660096 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660121 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660135 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.660283 4725 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664011 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664105 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664146 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664157 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664168 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664178 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664186 4725 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664196 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664222 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664231 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664240 4725 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664250 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664322 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664401 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664413 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664424 4725 reconciler_common.go:293] 
"Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664433 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664442 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664468 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664497 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664506 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664515 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664557 4725 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664570 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664580 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664590 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664598 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664607 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" 
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.662149 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664636 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664706 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664741 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664970 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.664998 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665020 4725 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665040 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665058 4725 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665076 4725 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665096 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665102 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665115 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665137 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665156 4725 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665174 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665241 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665284 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665305 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665324 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665342 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665362 4725 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665381 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665398 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665417 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665434 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665451 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665470 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665487 4725 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665505 4725 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\"
DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665522 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665540 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665557 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665573 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665590 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665607 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665627 4725 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665645 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665664 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665683 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665701 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665718 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665761 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc 
kubenswrapper[4725]: I1002 11:28:29.665779 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665803 4725 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665821 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665838 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665855 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665872 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665889 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665906 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665925 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665942 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665959 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665975 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.665991 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666020 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666038 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666055 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666073 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666089 4725 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666106 4725 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666123 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666140 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666156 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666193 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666211 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666231 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666248 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666265 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666282 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666299 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666316 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666333 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666349 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666366 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.666386 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.667502 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.668202 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.668277 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.668764 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.669073 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.669169 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.670257 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.670627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.671499 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.673600 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.673975 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.674582 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.675918 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.676775 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.678767 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.682048 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.686928 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.688241 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.690315 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.691178 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.691963 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.694059 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.694904 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.696566 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.697292 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.698035 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.699340 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.701121 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.701620 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.703253 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.704125 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.704892 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.705950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.706424 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.707357 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.708360 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.709236 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.710103 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.710569 4725 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.710671 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.713454 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.714141 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.715302 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.717311 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.718088 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.718977 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.719573 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.722441 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.722958 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.723938 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.724580 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.725805 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.726295 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.727300 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.727802 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.728917 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.729365 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.730242 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.730769 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.731359 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.732289 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.732752 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.733631 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.733672 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.734551 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.734616 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.739584 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.740752 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.744383 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.747532 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.757914 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.765505 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767644 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767664 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767680 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767689 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767698 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767715 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767736 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: 
I1002 11:28:29.767745 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767773 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767781 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767790 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767798 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767806 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767816 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767824 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767832 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767840 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767848 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767857 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767867 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc 
kubenswrapper[4725]: I1002 11:28:29.767875 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.767883 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.767958 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:28:30.767936986 +0000 UTC m=+30.675436449 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.768260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.779023 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.787776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.801610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.810112 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.811809 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.820167 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.820153 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.826631 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.833664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.853032 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: W1002 11:28:29.856211 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-68be23c89973600c85fb5f3afc03fea6401b49bf76fcbdfe914602b29ccbe782 WatchSource:0}: Error finding container 68be23c89973600c85fb5f3afc03fea6401b49bf76fcbdfe914602b29ccbe782: Status 404 returned error can't find the container with id 68be23c89973600c85fb5f3afc03fea6401b49bf76fcbdfe914602b29ccbe782 Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.868935 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.868989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.869023 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.869053 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869186 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 
11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869212 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869225 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869277 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:30.869259611 +0000 UTC m=+30.776759074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869347 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869377 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:30.869368544 +0000 UTC m=+30.776868007 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869415 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869444 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:30.869436055 +0000 UTC m=+30.776935518 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869464 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869480 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869489 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: E1002 11:28:29.869529 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:30.869519768 +0000 UTC m=+30.777019231 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.872253 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 
11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\
" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.890924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.911852 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:29 crc kubenswrapper[4725]: I1002 11:28:29.938575 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.127489 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lv8cx"] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.127951 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zs4dp"] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.128127 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.128149 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8rrpk"] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.128975 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2q2jl"] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.129195 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.129585 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.130040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.134582 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.134813 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.134824 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.135092 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.135563 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.135797 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.135881 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.135945 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.136156 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.136443 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.137543 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.137645 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.137761 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.137975 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 
11:28:30.138124 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.153611 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.170926 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171144 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-conf-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171204 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-etc-kubernetes\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171224 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171245 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-system-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171264 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-socket-dir-parent\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-system-cni-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/05ab4d5f-f28b-40a8-af40-baa85450dec4-hosts-file\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmw9\" (UniqueName: \"kubernetes.io/projected/1e9bad7c-78f8-435d-8449-7c5b04a16869-kube-api-access-gmmw9\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171355 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-kubelet\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171373 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-daemon-config\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171392 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-multus-certs\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171412 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcrf\" (UniqueName: \"kubernetes.io/projected/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-kube-api-access-bbcrf\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171430 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-binary-copy\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " 
pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171447 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e9bad7c-78f8-435d-8449-7c5b04a16869-rootfs\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171468 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-hostroot\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171502 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e9bad7c-78f8-435d-8449-7c5b04a16869-proxy-tls\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cni-binary-copy\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171541 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-bin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171558 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-multus\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171576 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-cnibin\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171606 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnkd\" (UniqueName: 
\"kubernetes.io/projected/05ab4d5f-f28b-40a8-af40-baa85450dec4-kube-api-access-wrnkd\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171623 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4rkz\" (UniqueName: \"kubernetes.io/projected/a370297b-12f2-48c3-9097-f8727e57baa1-kube-api-access-j4rkz\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171644 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e9bad7c-78f8-435d-8449-7c5b04a16869-mcd-auth-proxy-config\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171661 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cnibin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-k8s-cni-cncf-io\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171694 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-os-release\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-os-release\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.171760 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-netns\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.183484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.196333 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.208528 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.216318 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.227622 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.236119 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.250205 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 
11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\
" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.261134 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.267654 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.267777 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.267669 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.267847 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272559 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnkd\" (UniqueName: \"kubernetes.io/projected/05ab4d5f-f28b-40a8-af40-baa85450dec4-kube-api-access-wrnkd\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272603 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4rkz\" (UniqueName: \"kubernetes.io/projected/a370297b-12f2-48c3-9097-f8727e57baa1-kube-api-access-j4rkz\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272629 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e9bad7c-78f8-435d-8449-7c5b04a16869-mcd-auth-proxy-config\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cnibin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-k8s-cni-cncf-io\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272689 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-os-release\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-os-release\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-netns\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-conf-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " 
pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-etc-kubernetes\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272883 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-system-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272903 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-socket-dir-parent\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-system-cni-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/05ab4d5f-f28b-40a8-af40-baa85450dec4-hosts-file\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.272984 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-multus-certs\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcrf\" (UniqueName: \"kubernetes.io/projected/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-kube-api-access-bbcrf\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273032 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-binary-copy\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273057 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmmw9\" (UniqueName: \"kubernetes.io/projected/1e9bad7c-78f8-435d-8449-7c5b04a16869-kube-api-access-gmmw9\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273080 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-kubelet\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273102 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-daemon-config\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273124 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e9bad7c-78f8-435d-8449-7c5b04a16869-rootfs\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273144 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-hostroot\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cni-binary-copy\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-bin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-multus\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273239 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273266 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e9bad7c-78f8-435d-8449-7c5b04a16869-proxy-tls\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273289 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-cnibin\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e9bad7c-78f8-435d-8449-7c5b04a16869-rootfs\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-cnibin\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273365 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-hostroot\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273585 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-kubelet\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.273828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274109 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cni-binary-copy\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/a370297b-12f2-48c3-9097-f8727e57baa1-cni-binary-copy\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274187 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-multus\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274160 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-var-lib-cni-bin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-daemon-config\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274231 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-multus-certs\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/05ab4d5f-f28b-40a8-af40-baa85450dec4-hosts-file\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274425 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-os-release\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274464 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-cnibin\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-k8s-cni-cncf-io\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-os-release\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc 
kubenswrapper[4725]: I1002 11:28:30.274565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-system-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-socket-dir-parent\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274603 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-conf-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274652 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e9bad7c-78f8-435d-8449-7c5b04a16869-mcd-auth-proxy-config\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-host-run-netns\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274693 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-etc-kubernetes\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274767 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-multus-cni-dir\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.274802 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-system-cni-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.276985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.281967 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a370297b-12f2-48c3-9097-f8727e57baa1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.285978 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e9bad7c-78f8-435d-8449-7c5b04a16869-proxy-tls\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.293306 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmmw9\" (UniqueName: \"kubernetes.io/projected/1e9bad7c-78f8-435d-8449-7c5b04a16869-kube-api-access-gmmw9\") pod \"machine-config-daemon-lv8cx\" (UID: \"1e9bad7c-78f8-435d-8449-7c5b04a16869\") " pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.294095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4rkz\" (UniqueName: \"kubernetes.io/projected/a370297b-12f2-48c3-9097-f8727e57baa1-kube-api-access-j4rkz\") pod 
\"multus-additional-cni-plugins-8rrpk\" (UID: \"a370297b-12f2-48c3-9097-f8727e57baa1\") " pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.296534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnkd\" (UniqueName: \"kubernetes.io/projected/05ab4d5f-f28b-40a8-af40-baa85450dec4-kube-api-access-wrnkd\") pod \"node-resolver-zs4dp\" (UID: \"05ab4d5f-f28b-40a8-af40-baa85450dec4\") " pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.296831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcrf\" (UniqueName: \"kubernetes.io/projected/15fc62f2-0a7e-477c-8e35-0888c40e2d6c-kube-api-access-bbcrf\") pod \"multus-2q2jl\" (UID: \"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\") " pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.299952 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.313487 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.325168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.335845 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.344464 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.355196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.366926 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.378804 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.390242 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.398492 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.440705 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.446782 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2q2jl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.453965 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.469339 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zs4dp" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.486071 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c2hv"] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.486969 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.489179 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.489561 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.490557 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.490819 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.490972 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.491279 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.491327 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 02 11:28:30 crc kubenswrapper[4725]: W1002 11:28:30.496092 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fc62f2_0a7e_477c_8e35_0888c40e2d6c.slice/crio-126e3831d2ebf770cf7b751d93fbf73ec9331b7e002f2ecbd96e33cdc884b69d WatchSource:0}: Error finding container 126e3831d2ebf770cf7b751d93fbf73ec9331b7e002f2ecbd96e33cdc884b69d: Status 404 returned error can't find the container with id 126e3831d2ebf770cf7b751d93fbf73ec9331b7e002f2ecbd96e33cdc884b69d Oct 02 11:28:30 crc kubenswrapper[4725]: W1002 11:28:30.498458 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda370297b_12f2_48c3_9097_f8727e57baa1.slice/crio-cc5b87834de80ed42f46bd0cdb0edba6fdf856773e418b603a87b55ee6442533 WatchSource:0}: Error finding container cc5b87834de80ed42f46bd0cdb0edba6fdf856773e418b603a87b55ee6442533: Status 404 returned error can't find the container with id cc5b87834de80ed42f46bd0cdb0edba6fdf856773e418b603a87b55ee6442533 Oct 02 11:28:30 crc kubenswrapper[4725]: W1002 11:28:30.501623 4725 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9bad7c_78f8_435d_8449_7c5b04a16869.slice/crio-f35f1f900c8bdc15418dfcad3e6407ef7ad1989351b1cba3665fdf47f9760349 WatchSource:0}: Error finding container f35f1f900c8bdc15418dfcad3e6407ef7ad1989351b1cba3665fdf47f9760349: Status 404 returned error can't find the container with id f35f1f900c8bdc15418dfcad3e6407ef7ad1989351b1cba3665fdf47f9760349 Oct 02 11:28:30 crc kubenswrapper[4725]: W1002 11:28:30.503358 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05ab4d5f_f28b_40a8_af40_baa85450dec4.slice/crio-ed83e23a6d4668ee1467e89ef26b33605ae2877cccd8aa5222cef97e5e5220f4 WatchSource:0}: Error finding container ed83e23a6d4668ee1467e89ef26b33605ae2877cccd8aa5222cef97e5e5220f4: Status 404 returned error can't find the container with id ed83e23a6d4668ee1467e89ef26b33605ae2877cccd8aa5222cef97e5e5220f4 Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.508827 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.523565 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.536014 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.551584 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.562907 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.574940 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575477 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575523 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575546 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575586 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrdd\" (UniqueName: 
\"kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575686 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575785 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575900 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575923 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.575970 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.576001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.576085 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.577094 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.584019 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.595739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.612497 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.625003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.640066 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.658297 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677816 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677837 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677882 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677906 4725 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrdd\" (UniqueName: \"kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.677987 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678009 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678031 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678052 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678070 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678109 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678175 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678195 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678255 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678333 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678363 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678389 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678558 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc 
kubenswrapper[4725]: I1002 11:28:30.678598 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678918 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678943 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.678970 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679000 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679080 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679278 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679403 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.679914 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.680346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.687112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.692228 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a" exitCode=255 Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.692300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.692389 4725 scope.go:117] "RemoveContainer" containerID="f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.703848 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.704072 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrdd\" (UniqueName: 
\"kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd\") pod \"ovnkube-node-2c2hv\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.704214 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"cc5b87834de80ed42f46bd0cdb0edba6fdf856773e418b603a87b55ee6442533"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.704551 4725 scope.go:117] "RemoveContainer" containerID="eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.704763 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.706892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"57030eb32f9a1b4ad5910d2dab8677084961705f68da0d44a2bc73ceab832952"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.712166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.713552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zs4dp" event={"ID":"05ab4d5f-f28b-40a8-af40-baa85450dec4","Type":"ContainerStarted","Data":"ed83e23a6d4668ee1467e89ef26b33605ae2877cccd8aa5222cef97e5e5220f4"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.721492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" 
event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerStarted","Data":"3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.721543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerStarted","Data":"126e3831d2ebf770cf7b751d93fbf73ec9331b7e002f2ecbd96e33cdc884b69d"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.725570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.725627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"f35f1f900c8bdc15418dfcad3e6407ef7ad1989351b1cba3665fdf47f9760349"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.733021 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 
11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.733767 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.733816 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"68be23c89973600c85fb5f3afc03fea6401b49bf76fcbdfe914602b29ccbe782"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.743758 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.743829 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.743844 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"646004a639a4e64da9b4bf5d440de241cd9f13a997a1d32c93c95f8ef756a971"} Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.750296 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.752756 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.762422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.779799 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.780061 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:28:32.780017082 +0000 UTC m=+32.687516575 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.787856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.814785 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.816971 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.830030 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: W1002 11:28:30.830993 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6cd2823_e7fc_454e_9ec2_e3dcc81472e2.slice/crio-6fa823a6e770e3dfa08d4bb4e5fc93b51d7e315727f6038522dedf28ea806a42 WatchSource:0}: Error finding container 6fa823a6e770e3dfa08d4bb4e5fc93b51d7e315727f6038522dedf28ea806a42: Status 404 returned error can't find the container with id 6fa823a6e770e3dfa08d4bb4e5fc93b51d7e315727f6038522dedf28ea806a42 Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.846735 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.862505 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.881319 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.881398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.881436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.881469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881533 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881577 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881592 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881618 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881653 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:32.881634254 +0000 UTC m=+32.789133717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881696 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:32.881672235 +0000 UTC m=+32.789171698 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881534 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881699 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881744 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881854 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881855 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:32.881827809 +0000 UTC m=+32.789327272 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:30 crc kubenswrapper[4725]: E1002 11:28:30.881899 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:32.881886291 +0000 UTC m=+32.789385964 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.890168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.919127 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.935633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.954875 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.968631 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:30 crc kubenswrapper[4725]: I1002 11:28:30.985682 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:30Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.028547 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.069816 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.103430 4725 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.104848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.104884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.104895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.104992 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.109269 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.153597 4725 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.153882 4725 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.159170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.159223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.159235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.159255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.159268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.181289 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.186646 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.193516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.193560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.193570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.193586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.193598 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.213094 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.217953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.218004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.218016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.218034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.218049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.233255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 
11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.236893 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.242881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.242927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.242938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.242956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.242967 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.256088 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.259134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.259174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.259185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.259200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.259210 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.260171 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: 
I1002 11:28:31.267489 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.267653 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.272469 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.272636 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.277559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.277604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.277617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.277636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.277648 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.278109 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.303109 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.340322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.381260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.381300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.381314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.381332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.381345 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.382734 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.423091 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.461201 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.484254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.484309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.484323 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.484343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.484357 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.504769 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.507949 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.540510 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.580932 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.586549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.586609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.586627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.586650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.586666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.620569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.660351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.689043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.689082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.689096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.689114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.689126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.700057 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.744677 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8c9bee39acae774dc95345a9e1c20e79dd60f902067b500dfd972332d7d3b7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:11Z\\\",\\\"message\\\":\\\"W1002 11:28:11.173672 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 11:28:11.173936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759404491 cert, and key in /tmp/serving-cert-1279338611/serving-signer.crt, /tmp/serving-cert-1279338611/serving-signer.key\\\\nI1002 11:28:11.483292 1 observer_polling.go:159] Starting file observer\\\\nW1002 11:28:11.491714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 11:28:11.491998 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:11.501496 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1279338611/tls.crt::/tmp/serving-cert-1279338611/tls.key\\\\\\\"\\\\nF1002 11:28:11.659485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.747480 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" exitCode=0 Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.747572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.747624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"6fa823a6e770e3dfa08d4bb4e5fc93b51d7e315727f6038522dedf28ea806a42"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.749896 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.753081 4725 scope.go:117] "RemoveContainer" containerID="eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a" Oct 02 11:28:31 crc kubenswrapper[4725]: E1002 11:28:31.753238 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.754506 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7" exitCode=0 Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.754597 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.766325 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zs4dp" event={"ID":"05ab4d5f-f28b-40a8-af40-baa85450dec4","Type":"ContainerStarted","Data":"67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.768832 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.779430 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.798117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.798205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.798215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.798231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.798244 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.825827 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.861946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.901508 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.904025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.904070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.904082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.904099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.904110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:31Z","lastTransitionTime":"2025-10-02T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.945249 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:31 crc kubenswrapper[4725]: I1002 11:28:31.981339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.006494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.006537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.006547 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.006562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.006572 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.025736 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.061434 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.109177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.109218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.109228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.109245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.109256 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.111847 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.143850 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.181274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.212013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.212052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.212061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.212076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.212088 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.221757 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.260303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.267490 4725 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.267614 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.267506 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.267712 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.299924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" 
for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.313590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.313666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.313676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.313691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.313701 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.339132 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d92
2f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.393592 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.415863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.416108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.416116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.416130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.416138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.423272 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.462990 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.499888 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.519304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.519358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.519367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.519387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.519397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.547533 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z 
is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.622026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.622078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.622091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.622109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.622123 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.724986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.725028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.725036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.725052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.725061 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.773128 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8"}
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.775986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"}
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.776034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"}
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.776051 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"}
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.777996 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79"}
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.778861 4725 scope.go:117] "RemoveContainer" containerID="eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a"
Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.779060 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.786544 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.806697 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z"
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.807933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.808117 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:28:36.80808791 +0000 UTC m=+36.715587373 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
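The unmount failure just above is a separate, self-healing symptom: this early in kubelet startup the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered over its plugin socket, so tear-down of the leftover PVC mount is parked and retried with backoff (the "No retries permitted until ... durationBeforeRetry 4s" bookkeeping). Driver registration is reflected on the node's CSINode object; a minimal sketch for checking it with the official kubernetes Python client (assumes a reachable kubeconfig; "crc" is the node name taken from these records):

    from kubernetes import client, config

    # List the CSI drivers currently registered on node "crc" and check for
    # the hostpath provisioner named in the TearDown error above.
    config.load_kube_config()
    csinode = client.StorageV1Api().read_csi_node("crc")
    names = [d.name for d in (csinode.spec.drivers or [])]
    print(names)
    print("kubevirt.io.hostpath-provisioner registered:",
          "kubevirt.io.hostpath-provisioner" in names)

Once the provisioner pod comes back and re-registers, the parked unmount is retried and ordinarily succeeds without intervention.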
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.825495 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.827119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.827164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.827173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.827187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.827197 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.839550 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.854911 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.867200 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.880228 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.893746 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.905040 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.909426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.909482 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.909508 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.909525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909621 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909687 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909740 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909757 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909697 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:36.909677893 +0000 UTC m=+36.817177356 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909688 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909867 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:36.909838057 +0000 UTC m=+36.817337530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909874 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909625 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.909891 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.910053 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:36.910010852 +0000 UTC m=+36.817510325 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:32 crc kubenswrapper[4725]: E1002 11:28:32.910079 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:36.910069583 +0000 UTC m=+36.817569066 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.929563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.929625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.929638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.929656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.929669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:32Z","lastTransitionTime":"2025-10-02T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.943524 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:32 crc kubenswrapper[4725]: I1002 11:28:32.981079 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.020607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.031237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.031267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.031275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.031288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.031297 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.062609 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.100924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw
9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.133498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.133551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.133567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.133590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.133612 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.142856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.181573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658
b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.221141 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.235970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.236006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.236015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.236032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.236042 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.259951 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.267402 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:33 crc kubenswrapper[4725]: E1002 11:28:33.267522 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.300623 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.338465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.338500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.338510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.338527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.338538 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.340263 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.386261 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.422137 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.441133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.441175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.441185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.441203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.441219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.460290 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.503948 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.542943 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.544136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.544184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.544195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.544211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.544222 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.587042 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z 
is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.646322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.646356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.646366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.646382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.646392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.749352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.749386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.749397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.749413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.749425 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.782176 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79" exitCode=0 Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.782227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.787737 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.787809 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.787824 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.798934 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.811510 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.824140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.833497 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.845499 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.851307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.851338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.851347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.851360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.851369 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.858740 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.870011 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.901604 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.944107 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.954716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.954766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.954775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.954789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.954800 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:33Z","lastTransitionTime":"2025-10-02T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:33 crc kubenswrapper[4725]: I1002 11:28:33.980540 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:33Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.021180 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.057370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.057409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.057419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.057432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.057441 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.060385 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.105685 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.159954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.160002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.160013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.160029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.160043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.263276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.263352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.263370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.263395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.263412 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.267610 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.267635 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:34 crc kubenswrapper[4725]: E1002 11:28:34.267805 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:34 crc kubenswrapper[4725]: E1002 11:28:34.268014 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.365911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.365960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.365975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.365998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.366013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.469037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.469077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.469087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.469100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.469109 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.571836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.572136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.572146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.572159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.572168 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.675492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.675548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.675560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.675581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.675593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.778237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.778295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.778322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.778343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.778358 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.794049 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.811977 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.830011 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.845262 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.858830 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.872384 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.880439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.880466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.880475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.880488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.880497 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.887978 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2f
c3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.898435 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.909674 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.923560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.934517 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.947186 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.963345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.974572 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:34Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.983386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.983432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.983442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.983456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:34 crc kubenswrapper[4725]: I1002 11:28:34.983466 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:34Z","lastTransitionTime":"2025-10-02T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.086097 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.086145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.086157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.086172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.086196 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.188062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.188112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.188128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.188303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.188339 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.267222 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:35 crc kubenswrapper[4725]: E1002 11:28:35.267403 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.291018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.291067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.291076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.291091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.291102 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.393109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.393145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.393155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.393171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.393181 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.496031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.496070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.496083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.496102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.496114 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.599189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.599235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.599247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.599265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.599276 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.702291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.702347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.702360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.702379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.702392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.799412 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490" exitCode=0 Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.799473 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.804274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.804329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.804345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.804367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.804382 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.818337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.840963 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.858123 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.870649 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.888308 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.898841 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.906369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.906412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.906427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.906444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.906458 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:35Z","lastTransitionTime":"2025-10-02T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.913066 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.923693 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.936339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.947966 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.958570 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.971951 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:35 crc kubenswrapper[4725]: I1002 11:28:35.981372 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:35Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.008920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.008955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.008963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc 
kubenswrapper[4725]: I1002 11:28:36.008976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.008986 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.110596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.110629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.110638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.110652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.110660 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.213034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.213077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.213088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.213106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.213118 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.267042 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.267085 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.267532 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.267374 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.315904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.315964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.315987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.316015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.316036 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.378063 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7n6ff"] Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.379996 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.383141 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.383293 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.383571 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.383678 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.401703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.415996 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.417882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.417936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.417947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.417967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.417978 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.435196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.442088 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-host\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " 
pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.442172 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljmx\" (UniqueName: \"kubernetes.io/projected/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-kube-api-access-hljmx\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.442302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-serviceca\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.452594 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.468240 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\
\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.482569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.495136 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.507547 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.520406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.520478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.520511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.520544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.520570 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.525602 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.538830 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.543475 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-host\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.543589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-host\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.543654 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-serviceca\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.543672 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljmx\" (UniqueName: \"kubernetes.io/projected/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-kube-api-access-hljmx\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.545771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-serviceca\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.557539 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.573135 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljmx\" (UniqueName: \"kubernetes.io/projected/045b0cc5-fa2d-4dbe-89eb-80e841e6c947-kube-api-access-hljmx\") pod \"node-ca-7n6ff\" (UID: \"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\") " pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.576922 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.590911 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.610260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z 
is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.622091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.622130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.622142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.622158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.622171 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.697295 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7n6ff" Oct 02 11:28:36 crc kubenswrapper[4725]: W1002 11:28:36.711560 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod045b0cc5_fa2d_4dbe_89eb_80e841e6c947.slice/crio-447df572a83ef0610da98650c3572de5d6f670cf2e115902f47c98ff528685cb WatchSource:0}: Error finding container 447df572a83ef0610da98650c3572de5d6f670cf2e115902f47c98ff528685cb: Status 404 returned error can't find the container with id 447df572a83ef0610da98650c3572de5d6f670cf2e115902f47c98ff528685cb Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.724950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.724987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.724996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.725010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.725018 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.804785 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.811003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.812011 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7n6ff" event={"ID":"045b0cc5-fa2d-4dbe-89eb-80e841e6c947","Type":"ContainerStarted","Data":"447df572a83ef0610da98650c3572de5d6f670cf2e115902f47c98ff528685cb"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.818261 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.828152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.828191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.828203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.828219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.828231 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.832833 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.846252 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.846402 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.846379166 +0000 UTC m=+44.753878639 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.847548 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.864252 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.874350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.898254 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 
11:28:36.929512 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.931508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.931528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.931536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.931549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.931558 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:36Z","lastTransitionTime":"2025-10-02T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.939700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.947678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.947710 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.947750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.947777 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947844 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947893 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947918 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947922 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.947905816 +0000 UTC m=+44.855405279 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947934 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947940 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.947934027 +0000 UTC m=+44.855433480 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.947944 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.948008 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.947995119 +0000 UTC m=+44.855494572 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.948230 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.948249 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.948260 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:36 crc kubenswrapper[4725]: E1002 11:28:36.948301 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.948291907 +0000 UTC m=+44.855791370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.951679 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.963108 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.974132 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.985368 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:36 crc kubenswrapper[4725]: I1002 11:28:36.994752 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:36Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.002518 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.033773 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.033804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.033815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.033830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.033840 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.136125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.136172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.136184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.136200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.136211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.238697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.238753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.238765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.238780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.238791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.267944 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:37 crc kubenswrapper[4725]: E1002 11:28:37.268213 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.341342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.341387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.341398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.341415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.341427 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.444327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.444375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.444391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.444411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.444427 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.546931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.546971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.546980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.546995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.547006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.648923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.648978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.648997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.649020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.649035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.750998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.751068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.751083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.751105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.751120 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.819333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.819718 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.820545 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7n6ff" event={"ID":"045b0cc5-fa2d-4dbe-89eb-80e841e6c947","Type":"ContainerStarted","Data":"ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.822763 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423" exitCode=0 Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.822798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.841461 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.845343 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.853155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.853208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.853223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.853236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.853245 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.863900 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.877349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.888467 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.900772 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.910928 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.922413 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.938217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.950546 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.956281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.956319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.956328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.956343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.956353 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:37Z","lastTransitionTime":"2025-10-02T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.966970 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.980286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:37 crc kubenswrapper[4725]: I1002 11:28:37.992719 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:37Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.005523 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.023124 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.037607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.050417 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.058841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.058876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.058884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.058920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.058930 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.066303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.079397 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.090157 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.101217 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.112518 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.124161 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.135745 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.149272 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.159011 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.161340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.161366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.161383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.161400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.161412 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.171874 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.187909 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa2
7369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.198255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.264567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.264618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.264631 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.264647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.264655 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.267242 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:38 crc kubenswrapper[4725]: E1002 11:28:38.267336 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.267240 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:38 crc kubenswrapper[4725]: E1002 11:28:38.267594 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.367614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.367646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.367653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.367667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.367677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.470399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.470459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.470508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.470526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.470537 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.572987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.573269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.573282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.573297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.573308 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.675617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.675653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.675662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.675676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.675684 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.777900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.777951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.777961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.777979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.777990 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.828890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.829003 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.829399 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.842394 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.849918 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.858035 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.867621 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879549 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.879882 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.892859 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.905294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.925979 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.939912 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.951700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.964132 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.976799 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.981630 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.981664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.981675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.981691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.981701 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:38Z","lastTransitionTime":"2025-10-02T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:38 crc kubenswrapper[4725]: I1002 11:28:38.988833 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.001326 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:38Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.010599 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.026460 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.042530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.056568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.073489 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.084974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.085018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.085029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.085047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.085061 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.089520 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.105110 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.124375 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.137965 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.153361 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.167916 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.183144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.188492 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.188551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.188563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.188579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.188591 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.199228 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.216586 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.229711 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.267318 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:39 crc kubenswrapper[4725]: E1002 11:28:39.267448 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.291525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.291563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.291572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.291586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.291594 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.393940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.393993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.394009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.394030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.394419 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.497908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.497942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.497952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.497967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.497977 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.599884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.599916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.599926 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.599941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.599952 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.702712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.702769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.702781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.702799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.702810 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.804968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.805012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.805024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.805042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.805054 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.836361 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130" exitCode=0 Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.836521 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.836903 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.858316 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.869688 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.883869 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.896671 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.906758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.906812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.906823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.906837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 
02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.906850 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:39Z","lastTransitionTime":"2025-10-02T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.914306 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.928878 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.939968 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.954668 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.967927 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:39 crc kubenswrapper[4725]: I1002 11:28:39.984435 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.000927 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:39Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.008781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.008810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.008819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.008833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.008842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.011980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.027350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.046756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.111439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.111486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.111498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.111520 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.111531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.214265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.214312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.214322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.214337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.214348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.267219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.267270 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:40 crc kubenswrapper[4725]: E1002 11:28:40.267360 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:40 crc kubenswrapper[4725]: E1002 11:28:40.267417 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.316173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.316212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.316222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.316236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.316246 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.419422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.419462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.419473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.419488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.419499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.521850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.521938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.521950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.521977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.521990 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.624453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.624780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.624789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.624804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.624814 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.735544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.735578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.735589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.735604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.735617 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.837497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.837547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.837558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.837574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.837586 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.842115 4725 generic.go:334] "Generic (PLEG): container finished" podID="a370297b-12f2-48c3-9097-f8727e57baa1" containerID="8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94" exitCode=0 Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.842203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerDied","Data":"8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.842244 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.860979 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa2
7369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.873616 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.887945 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.901944 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.914531 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.928192 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.940486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.940557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.940568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.940582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.940593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:40Z","lastTransitionTime":"2025-10-02T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.941850 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.956169 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.967331 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.980954 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:40 crc kubenswrapper[4725]: I1002 11:28:40.994255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:40Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.008842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.025459 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.038878 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.042786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.042823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.042836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.042855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.042867 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.145485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.145524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.145532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.145549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.145561 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.247214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.247246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.247256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.247271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.247280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.267402 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.267531 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.284872 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.297252 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.310021 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.322122 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.335713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.348604 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.348653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.348680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.348700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.348713 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.349988 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.360591 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.360841 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.364501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.364549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.364562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.364579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.364594 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.373169 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.377583 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.380208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.380242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.380253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.380268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.380279 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.385558 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.394616 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.399248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.399282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.399292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.399306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.399316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.403179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.411354 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.414351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.414759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.414809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc 
kubenswrapper[4725]: I1002 11:28:41.414823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.414840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.414856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.426085 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.427065 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"4
0cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: E1002 11:28:41.427251 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.430408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.430442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.430469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.430485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.430499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.446579 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.460050 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.534040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.534076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.534084 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.534101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.534111 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.636887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.637224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.637238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.637253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.637264 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.739637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.739678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.739686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.739700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.739710 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.842056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.842134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.842156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.842207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.842231 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.849422 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" event={"ID":"a370297b-12f2-48c3-9097-f8727e57baa1","Type":"ContainerStarted","Data":"eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1"}
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.851236 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/0.log"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.854563 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290" exitCode=1
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.854613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290"}
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.855371 4725 scope.go:117] "RemoveContainer" containerID="b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290"
Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.872923 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.886081 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.900842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.915217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.928330 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.941282 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.944562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.944595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.944604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.944639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.944664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:41Z","lastTransitionTime":"2025-10-02T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.951743 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.965370 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-opera
tor@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.978381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:41 crc kubenswrapper[4725]: I1002 11:28:41.997525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.013746 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.027260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.047358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.047414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.047428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.047483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.047497 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.065526 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.101297 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa2
7369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.121055 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.138540 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.149635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.149677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.149685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.149700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.149708 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.155452 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.171499 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.183079 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.198394 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.213022 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.225743 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.236493 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.247977 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.251602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.251632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.251640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.251652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.251663 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.261443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.267856 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.267856 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:42 crc kubenswrapper[4725]: E1002 11:28:42.267984 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:42 crc kubenswrapper[4725]: E1002 11:28:42.268042 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.272227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.283830 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.293504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.354071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.354116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.354128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.354145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.354157 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.456939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.457001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.457017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.457455 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.457497 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.561194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.561238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.561250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.561268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.561278 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.664370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.664431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.664453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.664482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.664501 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.766942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.767166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.767236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.767311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.767366 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.861644 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/0.log" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.866275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.866805 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.869417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.869456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.869470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.869487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.869499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.887924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.909202 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.928603 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.953917 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.972533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.972584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.972596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.972618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.972630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:42Z","lastTransitionTime":"2025-10-02T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.976669 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:42 crc kubenswrapper[4725]: I1002 11:28:42.995577 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.003973 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s"] Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.005413 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.010142 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.012551 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.027513 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node 
event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.041991 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.053996 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.066308 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.075193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.075241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.075257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.075277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.075292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.082435 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.096606 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.111929 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.114397 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.114457 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03ae1553-963d-477c-93af-3c54f1b2b261-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.114531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfs9s\" (UniqueName: \"kubernetes.io/projected/03ae1553-963d-477c-93af-3c54f1b2b261-kube-api-access-cfs9s\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.114580 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.123666 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.138632 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.155294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.170158 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.177618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.177653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.177665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.177682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.177694 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.185853 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.198662 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.213125 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.215511 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cfs9s\" (UniqueName: \"kubernetes.io/projected/03ae1553-963d-477c-93af-3c54f1b2b261-kube-api-access-cfs9s\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.215565 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.215605 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.215655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03ae1553-963d-477c-93af-3c54f1b2b261-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.216135 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.216336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03ae1553-963d-477c-93af-3c54f1b2b261-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.221183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03ae1553-963d-477c-93af-3c54f1b2b261-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.227226 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.235904 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfs9s\" (UniqueName: \"kubernetes.io/projected/03ae1553-963d-477c-93af-3c54f1b2b261-kube-api-access-cfs9s\") pod \"ovnkube-control-plane-749d76644c-2np6s\" (UID: \"03ae1553-963d-477c-93af-3c54f1b2b261\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.244286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.259585 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.267576 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:43 crc kubenswrapper[4725]: E1002 11:28:43.267792 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.272112 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.280244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.280271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.280279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.280307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.280317 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.287786 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.299370 4725 status_manager.go:875] "Failed to update status for pod" 
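
The failure repeating through this window is always the same chain: the kubelet PATCHes a pod's status, the API server hands the object to the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake dies in Go's certificate verification because the node clock (2025-10-02T11:28:43Z) is past the certificate's NotAfter (2025-08-24T17:21:41Z). Below is a minimal sketch of that validity-window check in Go (the language these components log from), assuming the webhook's serving certificate has been exported to the hypothetical path /tmp/webhook-serving.pem:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical export path; on the node the certificate lives in
        // the network-node-identity pod's webhook-cert volume.
        raw, err := os.ReadFile("/tmp/webhook-serving.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now().UTC()
        // The same window test crypto/x509 applies during verification;
        // failing it yields "certificate has expired or is not yet valid".
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is inside its validity window")
        }
    }

Until that certificate is rotated (or the clock corrected), the API server rejects every status patch the same way, which is why the identical x509 line trails each pod below.
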
pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.313154 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.327160 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.342574 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: W1002 11:28:43.345465 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03ae1553_963d_477c_93af_3c54f1b2b261.slice/crio-7bc4e86dd06d9ea391ad6ddcc2f364c813ebf8b430527f9c92f4bd0c26d18bc5 WatchSource:0}: Error finding container 7bc4e86dd06d9ea391ad6ddcc2f364c813ebf8b430527f9c92f4bd0c26d18bc5: Status 404 returned error can't find the container with id 7bc4e86dd06d9ea391ad6ddcc2f364c813ebf8b430527f9c92f4bd0c26d18bc5 Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.361013 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:43Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.383670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.383744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.383757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.383777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.383789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.487417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.487491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.487507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.487568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.487587 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.590808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.590874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.590892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.590917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.590934 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.694024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.694361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.694610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.694897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.695347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.802955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.803039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.804226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.804265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.804289 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.870179 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" event={"ID":"03ae1553-963d-477c-93af-3c54f1b2b261","Type":"ContainerStarted","Data":"7bc4e86dd06d9ea391ad6ddcc2f364c813ebf8b430527f9c92f4bd0c26d18bc5"} Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.906683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.906740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.906751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.906768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:43 crc kubenswrapper[4725]: I1002 11:28:43.906781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:43Z","lastTransitionTime":"2025-10-02T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.010684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.010741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.010753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.010768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.010781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.091009 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zxhp4"] Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.091443 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.091498 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
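
Interleaved with the webhook failures is a second, independent symptom: the runtime keeps reporting NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the kubelet skips syncing any pod that needs the cluster network (host-network pods such as ovnkube-node and multus keep running) and keeps re-recording the node's NotReady condition. A simplified sketch of that readiness test, assuming the mere presence of a config file in the conf dir is the deciding factor (CRI-O's actual check goes through libcni and also validates the file contents):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniReady reports whether at least one CNI network config is present,
    // which is roughly what stands behind "NetworkReady=false ... no CNI
    // configuration file in /etc/kubernetes/cni/net.d/".
    func cniReady(confDir string) (bool, error) {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ready, err := cniReady("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("error reading conf dir:", err)
            return
        }
        fmt.Println("NetworkReady:", ready)
    }

Once the OVN-Kubernetes node components write their config into the CNI conf dir (ovnkube-node mounts it as host-cni-netd above), NetworkReady flips to true and the "Node became not ready" heartbeats stop.
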
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.104086 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.113422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.113447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.113456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.113468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.113477 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.115711 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.131765 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.147314 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.158175 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.172186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.182587 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.194906 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.208024 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.216035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.216073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.216083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.216102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.216113 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.219340 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.226795 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/a6af8c70-d2e8-4891-bf65-1deb3fb02044-kube-api-access-x6xjb\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.226917 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.240695 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.251864 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.263325 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.267017 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.267137 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.267238 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.267608 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.267970 4725 scope.go:117] "RemoveContainer" containerID="eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.275215 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.295981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.315459 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:44Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.318278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.318337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.318356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.318379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.318396 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.327514 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/a6af8c70-d2e8-4891-bf65-1deb3fb02044-kube-api-access-x6xjb\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.327555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.327662 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.327698 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:44.827686668 +0000 UTC m=+44.735186131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.342824 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/a6af8c70-d2e8-4891-bf65-1deb3fb02044-kube-api-access-x6xjb\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.421133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.421170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.421180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.421194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.421203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.523923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.523962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.523971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.523984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.523994 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.627096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.627138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.627146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.627166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.627175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.729744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.729784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.729794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.729809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.729821 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.832509 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.832552 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:45.832538602 +0000 UTC m=+45.740038065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.832983 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.875119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" event={"ID":"03ae1553-963d-477c-93af-3c54f1b2b261","Type":"ContainerStarted","Data":"83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e"} Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.932838 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:28:44 crc kubenswrapper[4725]: E1002 11:28:44.933112 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-02 11:29:00.933081965 +0000 UTC m=+60.840581468 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.936055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.936110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.936127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.936159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:44 crc kubenswrapper[4725]: I1002 11:28:44.936183 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:44Z","lastTransitionTime":"2025-10-02T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.034085 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.034372 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.035947 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:01.035918101 +0000 UTC m=+60.943417624 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.035993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.036084 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.036182 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036320 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036420 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:01.036396215 +0000 UTC m=+60.943895678 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036339 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036460 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036478 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036473 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036524 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:01.036509638 +0000 UTC m=+60.944009161 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036528 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036550 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.036619 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:01.03659556 +0000 UTC m=+60.944095103 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.038597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.038646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.038658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.038675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.038688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.140856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.140915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.140937 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.140967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.140989 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.243112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.243169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.243179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.243198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.243211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.267630 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.267653 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.267851 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.267947 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.345450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.345523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.345544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.345580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.345598 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.447719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.447805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.447824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.447851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.447870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.550631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.550680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.550694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.550713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.550743 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.652741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.652781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.652791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.652805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.652816 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.755024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.755062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.755072 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.755088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.755098 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.844449 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.844599 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: E1002 11:28:45.844666 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:47.844649748 +0000 UTC m=+47.752149211 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.858036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.858093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.858113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.858135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.858149 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.882407 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.884372 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674"} Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.961578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.961632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.961670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.961695 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:45 crc kubenswrapper[4725]: I1002 11:28:45.961713 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:45Z","lastTransitionTime":"2025-10-02T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.063970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.064010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.064033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.064052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.064065 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.165908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.165952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.165967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.165990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.166009 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.266971 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.267031 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:46 crc kubenswrapper[4725]: E1002 11:28:46.267377 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:46 crc kubenswrapper[4725]: E1002 11:28:46.267494 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.268345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.268379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.268390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.268407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.268418 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.371148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.371216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.371251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.371295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.371317 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.473595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.473648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.473662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.473711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.473747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.577137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.577226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.577250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.577277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.577295 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.679654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.679710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.679756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.679781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.679798 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.782605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.782755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.782780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.782806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.782825 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.884900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.884953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.884969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.884990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.885004 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.891538 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/1.log" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.892834 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/0.log" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.898105 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b" exitCode=1 Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.898202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.898252 4725 scope.go:117] "RemoveContainer" containerID="b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.899463 4725 scope.go:117] "RemoveContainer" containerID="ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b" Oct 02 11:28:46 crc kubenswrapper[4725]: E1002 11:28:46.899847 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.903443 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" event={"ID":"03ae1553-963d-477c-93af-3c54f1b2b261","Type":"ContainerStarted","Data":"e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.904005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.922706 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.937223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.949474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.962868 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.974854 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.986860 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.986899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.986910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.986925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.986937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:46Z","lastTransitionTime":"2025-10-02T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.987381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:46 crc kubenswrapper[4725]: I1002 11:28:46.996419 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:46Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.008770 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.018955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.031051 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.040537 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.051205 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.070982 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.089478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.089531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.089547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.089571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.089588 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.090155 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.107178 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.122401 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.138904 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 
11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.152386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.181157 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.192632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.192679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.192690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.192708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.192741 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.198459 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.214668 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.229874 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.244249 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.256685 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.267621 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.267802 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:47 crc kubenswrapper[4725]: E1002 11:28:47.267838 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:47 crc kubenswrapper[4725]: E1002 11:28:47.268038 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.277767 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.290060 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.294484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.294698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.294807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.294901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.294968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.304585 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.320709 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.330952 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.342092 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.359842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.372065 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:47Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.397807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.397857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.397869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.397886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.397899 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.500114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.500170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.500186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.500207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.500221 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.602578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.602619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.602628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.602642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.602653 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.705139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.705177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.705188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.705203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.705211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.807568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.807612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.807621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.807636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.807645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.866343 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:47 crc kubenswrapper[4725]: E1002 11:28:47.866508 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:47 crc kubenswrapper[4725]: E1002 11:28:47.866604 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:51.866581438 +0000 UTC m=+51.774080961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.908068 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/1.log" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.909781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.909812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.909822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.909835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:47 crc kubenswrapper[4725]: I1002 11:28:47.909844 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:47Z","lastTransitionTime":"2025-10-02T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.012375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.012413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.012422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.012436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.012446 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.115106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.115192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.115215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.115245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.115268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.218615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.218711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.218766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.218795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.218819 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.267793 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.267827 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:48 crc kubenswrapper[4725]: E1002 11:28:48.267957 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:48 crc kubenswrapper[4725]: E1002 11:28:48.268080 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.321492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.321535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.321546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.321562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.321574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.424669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.424748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.424763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.424781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.424796 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.527168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.527236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.527271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.527297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.527314 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.629600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.629633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.629642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.629655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.629667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.732328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.732387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.732397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.732411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.732422 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.835859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.835995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.836024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.836055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.836077 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.939215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.939254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.939262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.939277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:48 crc kubenswrapper[4725]: I1002 11:28:48.939287 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:48Z","lastTransitionTime":"2025-10-02T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.041815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.041876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.041898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.041928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.041952 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.144270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.144319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.144327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.144340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.144348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.246902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.246958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.246976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.247001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.247018 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.267359 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.267445 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:49 crc kubenswrapper[4725]: E1002 11:28:49.267493 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:49 crc kubenswrapper[4725]: E1002 11:28:49.272381 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.350368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.350459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.350483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.350517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.350543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.453962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.454381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.454522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.454667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.454848 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.557609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.557935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.558061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.558162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.558262 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.660393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.660430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.660442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.660459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.660472 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.763629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.763676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.763687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.763704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.763714 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.866465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.866548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.866576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.866607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.866632 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.969458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.969523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.969535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.969551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:49 crc kubenswrapper[4725]: I1002 11:28:49.969563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:49Z","lastTransitionTime":"2025-10-02T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.071480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.071542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.071559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.071583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.071602 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.174163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.174190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.174200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.174212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.174221 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.267914 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.267914 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:50 crc kubenswrapper[4725]: E1002 11:28:50.268113 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:50 crc kubenswrapper[4725]: E1002 11:28:50.268281 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.276523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.276571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.276589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.276609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.276626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.380135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.380185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.380196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.380218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.380230 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.474017 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.484907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.484955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.484973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.485000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.485018 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.487822 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.490283 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.508093 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.520691 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.531447 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.544444 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.555426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.569682 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.587384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.587432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.587444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.587465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.587477 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.588536 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.610009 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.622344 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.633318 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.646901 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.658330 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.671757 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.688294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.690303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.690355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.690364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.690379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.690394 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.702367 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:50Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.792355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.792388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.792396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.792411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.792419 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.895366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.895444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.895465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.895489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.895506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.998237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.998270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.998278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.998292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:50 crc kubenswrapper[4725]: I1002 11:28:50.998301 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:50Z","lastTransitionTime":"2025-10-02T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.101201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.101234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.101241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.101254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.101263 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.204984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.205066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.205080 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.205101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.205114 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.268103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.268259 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.268403 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.268624 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.287504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.299928 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.307611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.307650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.307662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.307678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.307689 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.316184 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.328836 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.342266 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.360412 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.374380 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.386958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.404909 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.409713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.409783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.409800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.409822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.409840 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.417553 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.429188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.440563 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.445506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.445542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.445553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.445570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.445583 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.452418 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.456139 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"4
0cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.459893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.459932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.459945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.459962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.459975 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.463769 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.471464 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.475838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.475882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.475896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.475916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.475935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.476516 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.489930 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.493842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.493869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.493877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.493891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.493900 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.501607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320
dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b85da8e0ff6fae603bf019e5845b2b244a7fafa27369f0c6f0957d1e204ec290\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"message\\\":\\\"s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468601 5939 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468678 5939 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.468759 5939 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 11:28:41.469222 5939 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1002 11:28:41.469274 5939 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:41.469282 5939 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1002 11:28:41.469294 5939 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1002 11:28:41.469311 5939 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:41.469320 5939 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 11:28:41.469333 5939 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:41.469354 5939 factory.go:656] Stopping watch factory\\\\nI1002 11:28:41.469372 5939 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:41.469406 5939 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:41.469417 5939 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] 
Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.506303 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.511217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.511287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.511300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.511317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.511327 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.514505 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.523542 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.523652 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.525614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.525637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.525645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.525660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.525669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.628952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.628980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.628990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.629018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.629028 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.731917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.731961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.731974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.731989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.731999 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.835439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.835490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.835500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.835517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.835528 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.910483 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.910652 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:51 crc kubenswrapper[4725]: E1002 11:28:51.910712 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:28:59.910696443 +0000 UTC m=+59.818195896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.937462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.937491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.937499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.937513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:51 crc kubenswrapper[4725]: I1002 11:28:51.937522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:51Z","lastTransitionTime":"2025-10-02T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.040487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.040532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.040543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.040581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.040593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.145048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.145173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.145196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.145226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.145250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.248073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.248116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.248128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.248147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.248158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.267094 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.267212 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:52 crc kubenswrapper[4725]: E1002 11:28:52.267322 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:52 crc kubenswrapper[4725]: E1002 11:28:52.267452 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.351496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.351526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.351536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.351548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.351557 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.454234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.454274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.454285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.454302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.454313 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.556820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.556949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.556958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.556972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.556985 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.659683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.659794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.659812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.659844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.659864 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.762644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.762684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.762701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.762741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.762781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.864781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.864815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.864825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.864867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.864878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.967092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.967169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.967181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.967223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:52 crc kubenswrapper[4725]: I1002 11:28:52.967236 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:52Z","lastTransitionTime":"2025-10-02T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.069777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.070111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.070299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.070452 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.070600 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.173501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.173537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.173548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.173562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.173573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.267927 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.268060 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:53 crc kubenswrapper[4725]: E1002 11:28:53.268215 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:53 crc kubenswrapper[4725]: E1002 11:28:53.268361 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.276130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.276182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.276192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.276210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.276221 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.378655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.378710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.378746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.378771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.378790 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.481714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.481780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.481792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.481812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.481825 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.584300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.584337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.584345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.584359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.584367 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.687315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.687418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.687433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.687453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.687465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.790753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.790793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.790806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.790822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.790832 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.893196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.893245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.893256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.893270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.893280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.996278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.996340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.996353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.996368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:53 crc kubenswrapper[4725]: I1002 11:28:53.996381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:53Z","lastTransitionTime":"2025-10-02T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.098671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.098704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.098712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.098735 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.098744 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.201878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.201925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.201935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.201953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.201969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.267405 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:54 crc kubenswrapper[4725]: E1002 11:28:54.267611 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.267410 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:54 crc kubenswrapper[4725]: E1002 11:28:54.268042 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.305274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.305333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.305350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.305373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.305392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.409276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.409574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.409659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.409761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.409889 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.513117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.513192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.513210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.513233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.513250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.616826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.616893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.616911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.616933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.616950 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.719707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.719823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.719843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.719876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.719902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.823087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.823137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.823149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.823167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.823178 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.925585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.925628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.925639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.925655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:54 crc kubenswrapper[4725]: I1002 11:28:54.925666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:54Z","lastTransitionTime":"2025-10-02T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.031632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.031662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.031671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.031684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.031693 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.133965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.134053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.134069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.134091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.134101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.235993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.236039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.236050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.236070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.236080 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.267431 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:55 crc kubenswrapper[4725]: E1002 11:28:55.267566 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.267431 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:55 crc kubenswrapper[4725]: E1002 11:28:55.267763 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.338350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.338377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.338384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.338418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.338429 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.441525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.441565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.441573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.441587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.441596 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.544115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.544157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.544165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.544179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.544188 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.646562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.646599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.646607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.646620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.646629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.748566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.748615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.748625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.748638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.748647 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.851439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.851489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.851501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.851519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.851531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.954142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.954177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.954186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.954200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:55 crc kubenswrapper[4725]: I1002 11:28:55.954209 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:55Z","lastTransitionTime":"2025-10-02T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.056593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.056653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.056667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.056686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.056698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.160357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.160404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.160418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.160437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.160449 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.262561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.262610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.262623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.262639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.262652 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.267956 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.268070 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:56 crc kubenswrapper[4725]: E1002 11:28:56.268150 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:56 crc kubenswrapper[4725]: E1002 11:28:56.268241 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.364769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.364810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.364840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.364856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.364866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.467316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.467357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.467371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.467386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.467403 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.569777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.569824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.569835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.569847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.569856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.672975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.673038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.673060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.673090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.673117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.775471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.775513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.775527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.775543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.775554 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.877707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.877784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.877799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.877826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.877840 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.980567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.980624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.980634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.980652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:56 crc kubenswrapper[4725]: I1002 11:28:56.980664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:56Z","lastTransitionTime":"2025-10-02T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.083290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.083348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.083367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.083391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.083408 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.186664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.186935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.187023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.187140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.187220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.267392 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:57 crc kubenswrapper[4725]: E1002 11:28:57.267575 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.267781 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:57 crc kubenswrapper[4725]: E1002 11:28:57.267971 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.295359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.295439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.295465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.295494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.295516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.398622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.398697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.398729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.398789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.398813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.502398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.502448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.502476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.502497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.502507 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.605316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.605884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.605972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.606084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.606162 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.709115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.709183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.709237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.709260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.709278 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.812099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.812139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.812152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.812206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.812215 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.915347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.915647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.915668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.915693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:57 crc kubenswrapper[4725]: I1002 11:28:57.915709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:57Z","lastTransitionTime":"2025-10-02T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.018678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.018770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.018791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.018813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.018830 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.121441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.121482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.121501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.121522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.121534 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.224196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.224261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.224282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.224310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.224332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.267458 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:28:58 crc kubenswrapper[4725]: E1002 11:28:58.267600 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.267481 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:28:58 crc kubenswrapper[4725]: E1002 11:28:58.267905 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.328077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.328130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.328141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.328160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.328173 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.429958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.429991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.430000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.430015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.430025 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
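[editor's note] The two "Error syncing pod, skipping" entries above are the pod workers refusing to sync regular pods while NetworkReady=false; host-network pods (such as the OVN and multus daemonsets that have to bring the network up in the first place) are exempt from that gate. A schematic sketch under that assumption — syncPod and the pod struct are illustrative names of mine, not kubelet code:

```go
package main

import (
	"errors"
	"fmt"
)

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false")

type pod struct {
	name        string
	hostNetwork bool
}

// syncPod models the gate: non-host-network pods are skipped until the
// runtime network becomes ready; host-network pods proceed regardless.
func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return fmt.Errorf("error syncing pod %q, skipping: %w", p.name, errNetworkNotReady)
	}
	// ... sandbox creation would proceed here ...
	return nil
}

func main() {
	for _, p := range []pod{
		{"openshift-network-diagnostics/network-check-target-xd92c", false},
		{"openshift-ovn-kubernetes/ovnkube-node-2c2hv", true},
	} {
		if err := syncPod(p, false); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("synced:", p.name)
		}
	}
}
```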
Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.532684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.532767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.532780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.532798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.532811 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.572117 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.572877 4725 scope.go:117] "RemoveContainer" containerID="ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.587606 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.605573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.615110 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.625377 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.635279 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.635312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.635321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.635335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.635345 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.642641 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.655500 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.667493 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.680869 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.695006 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.706158 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.716488 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.728222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.738032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.738236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.738309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.738405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.738466 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.740354 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.750948 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.765549 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.777208 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.787890 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:58Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.841204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.841245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.841256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.841273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.841285 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.944077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.944110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.944118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.944130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.944139 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:58Z","lastTransitionTime":"2025-10-02T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.949851 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/1.log" Oct 02 11:28:58 crc kubenswrapper[4725]: I1002 11:28:58.952634 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.046950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.047732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.047863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.047905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.047922 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.150760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.150804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.150818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.150840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.150855 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.253723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.253987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.254059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.254122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.254176 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.267077 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.267249 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:28:59 crc kubenswrapper[4725]: E1002 11:28:59.267455 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:28:59 crc kubenswrapper[4725]: E1002 11:28:59.267570 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.356434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.356486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.356499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.356519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.356531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.459173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.459228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.459245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.459268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.459284 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.561017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.561077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.561093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.561112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.561125 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.664256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.664500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.664613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.664764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.664846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.768079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.768402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.768559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.768662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.768785 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.871604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.871661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.871673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.871691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.871703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.957689 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/2.log" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.958928 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/1.log" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.962864 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" exitCode=1 Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.962921 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.962960 4725 scope.go:117] "RemoveContainer" containerID="ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.964067 4725 scope.go:117] "RemoveContainer" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" Oct 02 11:28:59 crc kubenswrapper[4725]: E1002 11:28:59.964334 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.973682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.973713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.973727 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.973763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.973776 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:28:59Z","lastTransitionTime":"2025-10-02T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.988035 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 
11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:28:59Z is after 2025-08-24T17:21:41Z" Oct 02 11:28:59 crc kubenswrapper[4725]: I1002 11:28:59.994276 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:28:59 crc kubenswrapper[4725]: E1002 11:28:59.994455 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:28:59 crc 
kubenswrapper[4725]: E1002 11:28:59.994559 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:15.994521033 +0000 UTC m=+75.902020506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.003718 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3
ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.016816 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.030969 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.047586 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.066519 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.075962 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.076015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.076024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.076037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.076047 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.083054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.092858 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.104414 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.117448 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.131429 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.148525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.161701 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.172822 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.177977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.178085 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.178095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.178107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.178115 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.192603 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ece80c103931d07936703111ce0332de1e232320dfbd5ec07d3e71fa25e2791b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"message\\\":\\\"e/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1002 11:28:45.960852 6185 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 11:28:45.960920 6185 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 11:28:45.960996 6185 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 11:28:45.961031 6185 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 11:28:45.961654 6185 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 11:28:45.962018 6185 handler.go:208] Removed *v1.Node event handler 2\\\\nI1002 11:28:45.962821 6185 factory.go:656] Stopping watch factory\\\\nI1002 11:28:45.962855 6185 ovnkube.go:599] Stopped ovnkube\\\\nI1002 11:28:45.962882 6185 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 11:28:45.962967 6185 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.201752 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.211745 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.266996 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.267016 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:00 crc kubenswrapper[4725]: E1002 11:29:00.267163 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:00 crc kubenswrapper[4725]: E1002 11:29:00.267238 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.280424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.280467 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.280480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.280495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.280506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.383470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.383519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.383533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.383552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.383567 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.485884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.485926 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.485938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.485954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.485966 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.588603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.588634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.588644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.588658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.588667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.690366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.690421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.690433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.690453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.690466 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.792263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.792318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.792330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.792348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.792359 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.894817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.894868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.894881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.894899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.894913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.968674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/2.log" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.972626 4725 scope.go:117] "RemoveContainer" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" Oct 02 11:29:00 crc kubenswrapper[4725]: E1002 11:29:00.973017 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.986953 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:00Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.997473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.997529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.997543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.997562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:00 crc kubenswrapper[4725]: I1002 11:29:00.997574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:00Z","lastTransitionTime":"2025-10-02T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.004021 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.004223 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:29:33.004195372 +0000 UTC m=+92.911694835 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.007073 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.024934 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.039222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.053233 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.063594 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.075396 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.085046 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.099915 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.100098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.100199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.100301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.100421 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.105506 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.105550 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.105567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.105620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.105718 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.105762 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.105774 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.105819 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:33.105802765 +0000 UTC m=+93.013302228 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.105971 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106206 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106236 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106272 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106289 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:33.106272827 +0000 UTC m=+93.013772290 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106291 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106350 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:33.106331079 +0000 UTC m=+93.013830582 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.106170 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.106916 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:33.106900754 +0000 UTC m=+93.014400227 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.122762 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.132018 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.142593 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.155148 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.165545 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.175850 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.188083 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.200144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.202802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.203033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.203132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.203476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.203551 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.266961 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.267080 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.267409 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.267479 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.281785 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.292980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307639 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.307841 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.327064 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.338664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.349557 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.361481 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.377462 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.387790 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.401278 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.409466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.409498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.409507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.409521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.409532 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.412776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.423260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.433327 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.445896 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.455803 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.467130 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.475420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.511586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.511647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.511664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc 
kubenswrapper[4725]: I1002 11:29:01.511704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.511733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.526143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.526196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.526212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.526232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.526249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.540900 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.544923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.544982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.544999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.545019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.545035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.563272 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.566512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.566560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.566572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.566592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.566606 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.578376 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.582070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.582104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.582113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.582144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.582154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.593703 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.597544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.597584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.597594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.597611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.597624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.609458 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:01Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:01 crc kubenswrapper[4725]: E1002 11:29:01.609598 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.616285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.616347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.616381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.616402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.616422 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.719108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.719173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.719185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.719204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.719216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.821507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.822033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.822056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.822085 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.822107 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.925319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.925355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.925365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.925379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:01 crc kubenswrapper[4725]: I1002 11:29:01.925392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:01Z","lastTransitionTime":"2025-10-02T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.026939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.026992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.027001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.027016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.027024 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.129706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.129789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.129799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.129816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.129826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.232369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.232430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.232441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.232459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.232472 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.267556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:02 crc kubenswrapper[4725]: E1002 11:29:02.267700 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.268046 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:02 crc kubenswrapper[4725]: E1002 11:29:02.268290 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.334810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.334862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.334874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.334890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.334901 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.437619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.437657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.437665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.437681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.437690 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.540919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.540978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.540994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.541018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.541035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.644066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.644122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.644139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.644162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.644178 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.746856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.746891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.746906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.746923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.746934 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.849709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.849782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.849793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.849809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.849822 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.952118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.952155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.952166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.952185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:02 crc kubenswrapper[4725]: I1002 11:29:02.952198 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:02Z","lastTransitionTime":"2025-10-02T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.055103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.055144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.055156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.055174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.055187 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.157500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.157543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.157552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.157570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.157581 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.260642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.260688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.260697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.260716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.260744 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.267928 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.267960 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:03 crc kubenswrapper[4725]: E1002 11:29:03.268052 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:03 crc kubenswrapper[4725]: E1002 11:29:03.268209 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.313449 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.328165 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.343511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"hos
t-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.357863 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 
11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.362602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.362643 4725 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.362654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.362670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.362682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.371606 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.385270 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.398089 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.427949 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.443116 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.462703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.465721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.465771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.465781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.465797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.465812 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.482781 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.497992 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.518902 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.529773 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.541827 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.555924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.569648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.569688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.569699 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.569715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.569756 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.575224 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d050
4e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.586955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:03Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.672772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.672826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.672841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.672862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.672878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.775214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.775252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.775260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.775273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.775281 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.877502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.877541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.877553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.877569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.877580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.979491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.979536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.979548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.979572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:03 crc kubenswrapper[4725]: I1002 11:29:03.979592 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:03Z","lastTransitionTime":"2025-10-02T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.082411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.082461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.082477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.082500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.082517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.184842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.184879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.184891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.184907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.184920 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.267090 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.267109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:04 crc kubenswrapper[4725]: E1002 11:29:04.267240 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:04 crc kubenswrapper[4725]: E1002 11:29:04.267339 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.286704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.286753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.286763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.286780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.286791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.389356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.389415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.389423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.389437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.389449 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.491494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.491808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.491834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.491858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.491874 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.594683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.594980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.595000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.595026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.595045 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.698122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.698186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.698197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.698217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.698232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.801026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.801168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.801186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.801205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.801227 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.903996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.904059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.904069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.904091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:04 crc kubenswrapper[4725]: I1002 11:29:04.904102 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:04Z","lastTransitionTime":"2025-10-02T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.007029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.007124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.007139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.007164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.007182 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.109234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.109278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.109292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.109310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.109326 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.211036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.211071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.211079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.211095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.211105 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.267111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:05 crc kubenswrapper[4725]: E1002 11:29:05.267274 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.267504 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:05 crc kubenswrapper[4725]: E1002 11:29:05.267688 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.313195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.313242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.313254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.313272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.313287 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.416205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.416243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.416254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.416269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.416280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.519035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.519091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.519104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.519127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.519140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.622138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.622205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.622218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.622242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.622255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.724434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.724484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.724499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.724517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.724529 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.827157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.827211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.827221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.827237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.827248 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.930253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.930304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.930315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.930332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:05 crc kubenswrapper[4725]: I1002 11:29:05.930346 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:05Z","lastTransitionTime":"2025-10-02T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.032921 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.032963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.032977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.032996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.033008 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.135431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.135470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.135479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.135493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.135504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.237975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.238016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.238025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.238039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.238051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.266967 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.267029 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:06 crc kubenswrapper[4725]: E1002 11:29:06.267129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:06 crc kubenswrapper[4725]: E1002 11:29:06.267251 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.340183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.340213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.340222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.340235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.340245 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.443242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.443293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.443310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.443328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.443343 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.545743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.545794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.545805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.545824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.545835 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.648188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.648237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.648248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.648266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.648281 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.750911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.751182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.751258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.751340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.751407 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.854246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.854291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.854302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.854318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.854329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.956353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.956394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.956407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.956424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:06 crc kubenswrapper[4725]: I1002 11:29:06.956436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:06Z","lastTransitionTime":"2025-10-02T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.059117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.059144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.059152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.059165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.059173 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.161579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.161623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.161635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.161654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.161670 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.264269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.264317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.264331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.264347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.264360 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.267849 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:07 crc kubenswrapper[4725]: E1002 11:29:07.267998 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.267846 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:07 crc kubenswrapper[4725]: E1002 11:29:07.268099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.366497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.366545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.366555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.366575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.366586 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.469052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.469096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.469108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.469129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.469142 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.571505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.571547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.571558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.571573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.571584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.674298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.674355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.674367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.674384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.674400 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.779054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.779099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.779111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.779128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.779140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.882323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.882385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.882401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.882427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.882444 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.985037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.985086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.985099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.985115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:07 crc kubenswrapper[4725]: I1002 11:29:07.985127 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:07Z","lastTransitionTime":"2025-10-02T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.087501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.087530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.087541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.087555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.087565 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.189832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.189892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.189901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.189916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.189927 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.267920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.267936 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:08 crc kubenswrapper[4725]: E1002 11:29:08.268074 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:08 crc kubenswrapper[4725]: E1002 11:29:08.268193 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.291917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.291964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.291974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.291986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.291995 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.394557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.394600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.394610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.394626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.394636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.496594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.496857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.496957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.497048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.497134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.599434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.599476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.599485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.599502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.599512 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.701949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.702280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.702291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.702306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.702316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.804076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.804114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.804123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.804138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.804155 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.906401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.906438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.906447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.906464 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:08 crc kubenswrapper[4725]: I1002 11:29:08.906474 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:08Z","lastTransitionTime":"2025-10-02T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.008316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.008351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.008359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.008373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.008383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.110973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.111011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.111024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.111040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.111052 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.213677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.213710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.213755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.213776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.213786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.267433 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.267556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:09 crc kubenswrapper[4725]: E1002 11:29:09.267693 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:09 crc kubenswrapper[4725]: E1002 11:29:09.267801 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.315941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.315999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.316011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.316036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.316045 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.418142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.418192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.418205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.418222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.418239 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.520614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.520647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.520662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.520682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.520691 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.622633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.622678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.622690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.622709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.622756 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.725356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.725416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.725428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.725445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.725456 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.827957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.827982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.827991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.828003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.828012 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.930123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.930206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.930244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.930276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:09 crc kubenswrapper[4725]: I1002 11:29:09.930316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:09Z","lastTransitionTime":"2025-10-02T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.032696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.032756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.032768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.032783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.032795 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.135807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.135855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.135869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.135886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.135899 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.237905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.237939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.237948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.237962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.237972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.267952 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:10 crc kubenswrapper[4725]: E1002 11:29:10.268090 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.268249 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:10 crc kubenswrapper[4725]: E1002 11:29:10.268296 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.339704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.339751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.339760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.339777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.339792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.442277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.442339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.442350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.442376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.442390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.544940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.544985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.544995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.545013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.545024 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.647237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.647272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.647279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.647293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.647302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.749937 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.749978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.749991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.750007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.750019 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.852103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.852146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.852158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.852174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.852187 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.954910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.954959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.954975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.954996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:10 crc kubenswrapper[4725]: I1002 11:29:10.955011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:10Z","lastTransitionTime":"2025-10-02T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.057891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.057933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.057941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.057955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.057964 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.163446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.163488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.163499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.163516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.163527 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.266254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.266305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.266314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.266329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.266341 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.271624 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.271779 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.271992 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.273083 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.287482 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.301814 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.311985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.323693 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.337652 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.350079 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.359925 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.368658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.368705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.368720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.368749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.368759 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.371940 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.389425 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.404695 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.416894 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.428751 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.441092 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.452604 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.466002 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.470360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.470405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.470418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.470435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.470448 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.479824 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9c
d31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.488861 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.572276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.572316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.572328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.572343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.572353 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.674596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.674647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.674657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.674673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.674683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.777885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.777929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.777940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.777957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.777972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.810272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.810506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.810585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.810661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.810769 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.823427 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.826648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.826774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.826845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.826912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.826973 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.843090 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.847277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.847329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.847343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.847358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.847367 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.859164 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.862668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.862740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.862749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.862761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.862770 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.873837 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.877059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.877128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.877140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.877161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.877215 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.892859 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:11Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:11 crc kubenswrapper[4725]: E1002 11:29:11.893388 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.895123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.895190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.895205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.895227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.895245 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.997256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.997307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.997319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.997339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:11 crc kubenswrapper[4725]: I1002 11:29:11.997351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:11Z","lastTransitionTime":"2025-10-02T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.099472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.099550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.099564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.099579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.099611 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.202024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.202295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.202381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.202473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.202558 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.267410 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:12 crc kubenswrapper[4725]: E1002 11:29:12.267553 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.267841 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:12 crc kubenswrapper[4725]: E1002 11:29:12.267993 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.305049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.305089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.305099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.305115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.305125 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.407445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.407491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.407502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.407518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.407528 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.510038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.510086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.510097 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.510113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.510124 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.611560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.611593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.611605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.611628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.611639 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.713632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.713672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.713684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.713699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.713712 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.816860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.816913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.816929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.816948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.816964 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.919596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.919642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.919653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.919669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:12 crc kubenswrapper[4725]: I1002 11:29:12.919680 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:12Z","lastTransitionTime":"2025-10-02T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.021321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.021362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.021370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.021382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.021391 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.124553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.124588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.124600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.124616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.124626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.226829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.226869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.226880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.226895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.226904 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.267405 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.267462 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:13 crc kubenswrapper[4725]: E1002 11:29:13.267541 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
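All of the kubelet noise above has a single root cause, stated in the repeated condition message: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so the kubelet keeps the node NotReady and refuses to sync any pod that needs pod networking. The readiness test the message implies is essentially "does a CNI config file exist in the conf dir yet". A minimal Python sketch of that check (the helper name and file patterns are illustrative assumptions, not kubelet source):

# Minimal sketch of the readiness test implied by the message above: the
# node stays NotReady until a CNI config file shows up in the conf dir.
# File patterns and helper name are illustrative assumptions, not kubelet code.
import glob
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # conf dir named in the log message

def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
    patterns = ("*.conf", "*.conflist", "*.json")
    found = [f for p in patterns for f in glob.glob(os.path.join(conf_dir, p))]
    return bool(found)

if __name__ == "__main__":
    if cni_config_present():
        print("CNI config present; the network plugin can report ready")
    else:
        print("no CNI configuration file; pods needing pod networking stay pending")

On this node the file is expected to be written by OVN-Kubernetes once its ovnkube-controller container stays up, which is why the crash loop further down matters.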
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:13 crc kubenswrapper[4725]: E1002 11:29:13.267632 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.328925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.328964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.328973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.328987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.328996 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.433274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.433323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.433340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.433361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.433379 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.536096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.536129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.536138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.536151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.536159 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.638807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.638860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.638870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.638888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.638900 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.741178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.741257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.741270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.741289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.741327 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.843533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.843575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.843586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.843599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.843608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.946044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.946139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.946158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.946179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:13 crc kubenswrapper[4725]: I1002 11:29:13.946192 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:13Z","lastTransitionTime":"2025-10-02T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.047783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.047842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.047859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.047881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.047897 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.150412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.150476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.150495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.150518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.150535 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.252942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.252992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.253026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.253048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.253065 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.267314 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.267378 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:14 crc kubenswrapper[4725]: E1002 11:29:14.267447 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:14 crc kubenswrapper[4725]: E1002 11:29:14.267530 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.268150 4725 scope.go:117] "RemoveContainer" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" Oct 02 11:29:14 crc kubenswrapper[4725]: E1002 11:29:14.268282 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.355415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.355456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.355465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.355477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.355487 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.458008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.458056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.458070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.458088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.458103 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.560187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.560222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.560230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.560244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.560255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.663094 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.663136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.663147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.663169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.663180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.765815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.765862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.765871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.765885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.765898 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.868868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.869619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.869662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.869692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.869713 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.972251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.972292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.972304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.972320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:14 crc kubenswrapper[4725]: I1002 11:29:14.972333 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:14Z","lastTransitionTime":"2025-10-02T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.075382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.075452 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.075468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.075487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.075501 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.178133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.178233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.178253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.178276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.178292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.267268 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.267448 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:15 crc kubenswrapper[4725]: E1002 11:29:15.267576 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 11:29:15 crc kubenswrapper[4725]: E1002 11:29:15.267840 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.280776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.280821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.280833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.280849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.280859 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.382994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.383024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.383035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.383051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.383062 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.485791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.485858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.485873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.485896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.485909 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.588437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.588477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.588488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.588504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.588515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.691561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.691622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.691639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.691662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.691678 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.795211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.795289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.795316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.795344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.795366 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.898672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.898764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.898784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.898809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:15 crc kubenswrapper[4725]: I1002 11:29:15.898836 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:15Z","lastTransitionTime":"2025-10-02T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.002803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.002843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.002854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.002870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.002881 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.059064 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:16 crc kubenswrapper[4725]: E1002 11:29:16.059241 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 02 11:29:16 crc kubenswrapper[4725]: E1002 11:29:16.059316 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:29:48.059300275 +0000 UTC m=+107.966799738 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.105843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.106090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.106111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.106129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.106141 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
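The metrics-certs failure above is a different symptom of the same stall: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" comes from the kubelet's secret cache, which typically has not (yet) registered the secret for this pod rather than the secret being absent from the cluster, and the mount retry itself backs off exponentially (hence durationBeforeRetry 32s). A quick way to tell whether the secret exists at all on the API side is a hypothetical check with the official Kubernetes Python client:

# Hypothetical diagnostic for the MountVolume failure above: check whether
# the secret the kubelet is waiting on exists in the API. Uses the official
# Kubernetes Python client; namespace and name come from the log line.
from kubernetes import client, config

def secret_exists(namespace: str, name: str) -> bool:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_secret(name, namespace)
        return True
    except client.exceptions.ApiException as e:
        if e.status == 404:
            return False
        raise

print(secret_exists("openshift-multus", "metrics-daemon-secret"))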
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.210682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.210763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.210775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.210794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.210808 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.268072 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:16 crc kubenswrapper[4725]: E1002 11:29:16.268257 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.268076 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 11:29:16 crc kubenswrapper[4725]: E1002 11:29:16.268456 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.313304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.313350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.313360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.313376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.313386 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.415377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.415422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.415433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.415451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.415465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.517786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.517832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.517847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.517866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.517881 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.620836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.620909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.620932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.620961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.620984 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.722772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.722822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.722833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.722847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.722857 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.824843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.824884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.824896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.824914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.824928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.928273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.928330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.928347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.928373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:16 crc kubenswrapper[4725]: I1002 11:29:16.928391 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:16Z","lastTransitionTime":"2025-10-02T11:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.030839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.030901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.030918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.030945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.030964 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
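With one journal entry per line, the excerpt is dominated by the same five status messages repeated roughly every 100 ms. A small hypothetical filter that keeps every error entry but only the first occurrence of each informational message makes the actionable lines (sandbox creation, the crash loop, the mount failure) stand out:

# Hypothetical post-processing filter for this journal excerpt: keep all
# klog error entries (E...) and the first occurrence of each informational
# message, dropping the repeated per-attempt node-status spam.
import re
import sys

MSG = re.compile(r'\] "(?P<msg>[^"]+)"')   # klog structured message text
IS_ERROR = re.compile(r': E\d{4} ')        # klog severity marker (E = error)

def filter_log(lines):
    seen = set()
    for line in lines:
        m = MSG.search(line)
        key = m.group("msg") if m else line
        if IS_ERROR.search(line) or key not in seen:
            seen.add(key)
            yield line  # keep errors and the first of each info message

if __name__ == "__main__":
    sys.stdout.writelines(filter_log(sys.stdin))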
Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.133785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.133856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.133871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.133891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.133907 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.238209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.238293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.238318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.238347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.238370 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.267842 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.267841 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:17 crc kubenswrapper[4725]: E1002 11:29:17.268093 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:17 crc kubenswrapper[4725]: E1002 11:29:17.268222 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.341596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.341674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.341694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.341755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.341775 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.444887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.444948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.444965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.444993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.445010 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.548515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.548575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.548597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.548625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.548647 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.651846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.651909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.651927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.651951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.651969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.755084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.755143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.755161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.755182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.755207 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.857070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.857130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.857140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.857152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.857163 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.960407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.960481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.960506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.960537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:17 crc kubenswrapper[4725]: I1002 11:29:17.960560 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:17Z","lastTransitionTime":"2025-10-02T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.063329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.063364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.063373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.063388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.063397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.166362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.166418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.166426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.166440 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.166467 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.267594 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.267595 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:18 crc kubenswrapper[4725]: E1002 11:29:18.267772 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:18 crc kubenswrapper[4725]: E1002 11:29:18.267991 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.269569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.269637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.269660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.269682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.269698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.372610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.372685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.372703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.372743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.372756 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.475047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.475080 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.475090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.475105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.475116 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.578652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.578719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.578787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.578816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.578838 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.682415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.682461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.682473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.682488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.682498 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.786035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.786089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.786106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.786130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.786146 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.889828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.889977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.889996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.890019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.890079 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.992256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.992295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.992307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.992323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:18 crc kubenswrapper[4725]: I1002 11:29:18.992338 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:18Z","lastTransitionTime":"2025-10-02T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.022603 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/0.log" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.022646 4725 generic.go:334] "Generic (PLEG): container finished" podID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" containerID="3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a" exitCode=1 Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.022671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerDied","Data":"3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.023020 4725 scope.go:117] "RemoveContainer" containerID="3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.039843 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.059682 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d050
4e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.071856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.083239 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.095054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.095090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.095099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.095113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.095122 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.097338 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.108848 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.122014 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.135317 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.145923 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.158063 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.169479 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.183119 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.192922 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.197016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.197147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.197164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.197180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.197191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.203679 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.216847 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.226986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.242822 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:19Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.268826 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:19 crc kubenswrapper[4725]: E1002 11:29:19.268937 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.268976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:19 crc kubenswrapper[4725]: E1002 11:29:19.269023 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.303546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.303619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.303639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.303775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.303816 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.406529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.406600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.406623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.406651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.406675 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.508799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.508849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.508866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.508887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.508903 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.611870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.611905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.611915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.611930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.611940 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.714318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.714360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.714369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.714383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.714391 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.817247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.817312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.817329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.817352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.817372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.919676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.919750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.919764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.919779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:19 crc kubenswrapper[4725]: I1002 11:29:19.919791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:19Z","lastTransitionTime":"2025-10-02T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.021413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.021449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.021460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.021475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.021486 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.027191 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/0.log" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.027259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerStarted","Data":"81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.043563 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.059462 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.076427 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.093769 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.111226 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124273 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.124933 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.138567 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.156988 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.168259 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.182276 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.196818 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.211179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
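The kube-multus termination message above shows the daemon's "Readiness Indicator file check" polling for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf and exiting with "pollimmediate error: timed out waiting for the condition" once the deadline passed. A rough stdlib equivalent of that poll-with-deadline pattern follows; the path is from the log, but the interval and timeout values are illustrative, not multus's actual settings.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path until it exists or the timeout elapses,
    // mirroring the poll loop behind the multus readiness indicator check.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // file exists: the default network is ready
            } else if !errors.Is(err, os.ErrNotExist) {
                return err // unexpected error, not just "missing"
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Path taken from the log; 1s/45s are illustrative values.
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        if err != nil {
            fmt.Println("readiness check failed:", err)
            os.Exit(1)
        }
        fmt.Println("readiness indicator present")
    }

In the log, the file never appears because ovn-kubernetes has not written its config, so the container exits 1 and restarts, which is why restartCount climbs to 1 in the status patch above.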
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.222622 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.226517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.226550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.226564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.226584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.226597 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.236637 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.247018 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.258259 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.267530 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.267534 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:20 crc kubenswrapper[4725]: E1002 11:29:20.267642 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:20 crc kubenswrapper[4725]: E1002 11:29:20.267797 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.277100 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:20Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.329431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.329462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.329473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.329487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.329501 4725 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.432859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.432897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.432908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.432924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.432935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.535489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.535534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.535543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.535556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.535568 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.638441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.638480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.638491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.638505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.638514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.741373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.741420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.741431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.741448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.741463 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.844369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.844424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.844442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.844463 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.844476 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.947418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.947472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.947484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.947502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:20 crc kubenswrapper[4725]: I1002 11:29:20.947514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:20Z","lastTransitionTime":"2025-10-02T11:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.050281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.050341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.050357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.050379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.050395 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.153001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.153047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.153060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.153078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.153090 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.255851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.255898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.255908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.255923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.255935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.267521 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:21 crc kubenswrapper[4725]: E1002 11:29:21.267770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.268126 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:21 crc kubenswrapper[4725]: E1002 11:29:21.268349 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
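The condition repeated throughout these entries, "NetworkReady=false ... no CNI configuration file in /etc/kubernetes/cni/net.d/", comes down to the container runtime finding no network config on disk, which in turn keeps the node NotReady and blocks sandbox creation for the pods listed above. Below is a minimal sketch of that directory check, assuming the standard CNI config extensions (.conf, .conflist, .json) that libcni accepts; the function name is ours, not the kubelet's.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfPresent reports whether dir contains at least one CNI network
    // configuration file, the condition behind NetworkReady in the log.
    func cniConfPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni loads
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfPresent("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("check failed:", err)
            os.Exit(1)
        }
        if !ok {
            // Matches the kubelet message: the network plugin has not
            // written its config yet, so the node stays NotReady.
            fmt.Println("no CNI configuration file found; network plugin not ready")
            os.Exit(1)
        }
        fmt.Println("CNI configuration present")
    }

Once ovn-kubernetes writes its config into this directory, the Ready condition flips and the "No sandbox for pod can be found" retries above can proceed.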
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.284077 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.297748 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.316552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.342687 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.355375 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.358663 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.358687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.358699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.358713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.358747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.373525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.398035 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.414360 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.428897 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.446936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.462769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.462797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc 
kubenswrapper[4725]: I1002 11:29:21.462805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.462818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.462828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.463190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 
02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.478068 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.496077 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.513859 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.530448 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.545317 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.567018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.567055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.567063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.566815 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d050
4e3f8fcfb15cb7c02a6bd96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.567078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.567138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.669577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.669611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.669623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.669637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.669646 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.772551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.773020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.773234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.773443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.773626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.878021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.878091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.878111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.878138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.878164 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.934644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.934768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.934787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.934811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.934896 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: E1002 11:29:21.955576 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.961389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.961612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.961860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.962051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.962208 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: E1002 11:29:21.976495 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.981325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.981362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.981371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.981385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:21 crc kubenswrapper[4725]: I1002 11:29:21.981397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:21Z","lastTransitionTime":"2025-10-02T11:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:21 crc kubenswrapper[4725]: E1002 11:29:21.996325 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:21Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.000997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.001185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.001505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.001865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.002183 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: E1002 11:29:22.022189 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:22Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.027089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.027124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.027138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.027156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.027170 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: E1002 11:29:22.044069 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:22Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:22 crc kubenswrapper[4725]: E1002 11:29:22.044208 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.046427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
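The kubelet gives up after a bounded number of attempts, which is what "update node status exceeds retry count" above records (upstream kubelet tries nodeStatusUpdateRetry = 5 times per sync loop). The payload it was trying to apply is a strategic merge patch, logged with its quotes backslash-escaped. A rough Python sketch for recovering and inspecting the conditions from one such journal line fed on stdin; the unescaping below matches the escaping as captured in this excerpt, and a log captured differently may need one fewer round:

import json
import re
import sys

line = sys.stdin.read()
# Grab the escaped patch body between 'failed to patch status \"' and
# '\" for node'; non-greedy, so only the first entry on stdin is used.
m = re.search(r'failed to patch status \\"(\{.*?\})\\" for node', line, re.S)
if not m:
    sys.exit("no status patch found")

payload = m.group(1)
while '\\"' in payload:  # peel the backslash escaping one round at a time
    payload = payload.encode("ascii").decode("unicode_escape")

patch = json.loads(payload)
for cond in patch["status"]["conditions"]:
    print(cond["type"], cond["status"], cond["reason"])

Run against the entry above, this prints the four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready), with reason KubeletNotReady on the Ready condition.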
event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.046492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.046506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.046529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.046543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.150512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.150598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.150620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.150648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.150665 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.254078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.254136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.254151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.254172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.254184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.267516 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.267580 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:22 crc kubenswrapper[4725]: E1002 11:29:22.267717 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:22 crc kubenswrapper[4725]: E1002 11:29:22.267911 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.356900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.357006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.357020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.357036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.357047 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.459507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.459697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.459816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.459844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.459867 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.562681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.563175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.563355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.563590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.563877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.665862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.665902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.665915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.665932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.665945 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.769000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.769037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.769048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.769064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.769076 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.872075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.872142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.872156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.872173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.872184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.975018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.975078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.975087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.975121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:22 crc kubenswrapper[4725]: I1002 11:29:22.975133 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:22Z","lastTransitionTime":"2025-10-02T11:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.077986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.078044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.078058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.078075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.078089 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.180412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.180484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.180500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.180522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.180534 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.267556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.267669 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:23 crc kubenswrapper[4725]: E1002 11:29:23.267774 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
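Meanwhile the Ready condition keeps reporting NetworkReady=false for the same reason each time: the container runtime finds no CNI configuration in /etc/kubernetes/cni/net.d/, which the network plugin is expected to write once it starts. Pods that need a CNI network (the network-check and metrics pods above) are skipped until then, while host-network pods are unaffected. A small sketch, run on the node, that checks the directory the runtime reports as empty; the extension list mirrors what libcni-based runtimes commonly load:

from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory from the error above

# libcni-based runtimes typically load *.conf, *.conflist and *.json files;
# an empty result here matches the NetworkReady=false message in the log.
configs = sorted(
    p for p in CNI_CONF_DIR.iterdir()
    if p.suffix in {".conf", ".conflist", ".json"}
) if CNI_CONF_DIR.is_dir() else []

if not configs:
    print(f"no CNI configuration in {CNI_CONF_DIR}; "
          "the network plugin has not written one yet")
else:
    for p in configs:
        print("found:", p)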
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:23 crc kubenswrapper[4725]: E1002 11:29:23.267937 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.282518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.282549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.282560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.282573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.282583 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.385328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.385370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.385381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.385399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.385411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.487531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.487569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.487578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.487591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.487605 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.589713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.589988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.590055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.590122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.590202 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.692496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.692863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.693043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.693186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.693455 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.796475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.796520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.796529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.796543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.796552 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.899422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.899489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.899507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.899531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:23 crc kubenswrapper[4725]: I1002 11:29:23.899548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:23Z","lastTransitionTime":"2025-10-02T11:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.002104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.002145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.002155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.002170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.002180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.105333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.105419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.105443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.105472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.105491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.207627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.207687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.207698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.207750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.207768 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.267505 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.267595 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:24 crc kubenswrapper[4725]: E1002 11:29:24.267650 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
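The "No sandbox for pod can be found. Need to start a new one" lines are the flip side of the same condition: the kubelet wants to create pod sandboxes for these workloads, but sandbox creation would immediately fail CNI setup, so each sync is skipped with "network is not ready". A sketch that confirms the missing sandbox through the CRI runtime, assuming crictl is installed on the node (as it normally is on an OpenShift host):

import json
import subprocess

POD_NAME = "network-check-target-xd92c"  # pod named in the sandbox message above

# Ask the CRI runtime (CRI-O on this node) for sandboxes matching the pod
# name; an empty list matches "No sandbox for pod can be found".
out = subprocess.run(
    ["crictl", "pods", "--name", POD_NAME, "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
sandboxes = json.loads(out).get("items", [])
if not sandboxes:
    print(f"no sandbox for {POD_NAME}; "
          "kubelet will create one once the network is ready")
for sb in sandboxes:
    print(sb["id"], sb["state"])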
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:24 crc kubenswrapper[4725]: E1002 11:29:24.267797 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.310208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.310264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.310280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.310303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.310320 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.412468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.412538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.412562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.412590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.412611 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.515290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.515393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.515419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.515450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.515479 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.618595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.618661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.618679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.618760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.618794 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.721357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.721400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.721411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.721425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.721433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.824444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.824521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.824545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.824574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.824595 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.927474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.927532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.927550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.927576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:24 crc kubenswrapper[4725]: I1002 11:29:24.927593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:24Z","lastTransitionTime":"2025-10-02T11:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.030286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.030367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.030391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.030421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.030447 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.133058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.133112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.133130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.133148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.133162 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.237360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.237407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.237419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.237435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.237446 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.267637 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.267655 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:25 crc kubenswrapper[4725]: E1002 11:29:25.268325 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:25 crc kubenswrapper[4725]: E1002 11:29:25.268660 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.268980 4725 scope.go:117] "RemoveContainer" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.339528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.339608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.339620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.339636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.339650 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.442961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.442993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.443002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.443015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.443024 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.546024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.546056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.546066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.546081 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.546090 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.648841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.648877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.648888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.648904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.648916 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.750967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.751012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.751024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.751041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.751051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.853682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.853717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.853745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.853759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.853768 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.956702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.956771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.956781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.956798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:25 crc kubenswrapper[4725]: I1002 11:29:25.956809 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:25Z","lastTransitionTime":"2025-10-02T11:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.049951 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/2.log" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.053258 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.053956 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.059702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.059998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.060136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.060265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.060388 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.067508 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.080424 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.090471 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.101694 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.113205 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.124513 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.135922 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.145586 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.154661 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.162263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.162299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.162307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.162321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.162330 4725 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.165056 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.176060 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.186320 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.196642 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.208281 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.220775 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.239317 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc4
6643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.249186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:26Z is after 2025-08-24T17:21:41Z" Oct 02 
11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.264937 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.264996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.265009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.265024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.265035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.267192 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.267203 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:26 crc kubenswrapper[4725]: E1002 11:29:26.267385 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:26 crc kubenswrapper[4725]: E1002 11:29:26.267507 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.278028 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.367270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.367301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.367312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.367327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.367339 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.470226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.470320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.470343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.470399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.470420 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.573953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.573999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.574015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.574035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.574051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.677345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.677413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.677429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.677461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.677478 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.779978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.780024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.780037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.780054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.780065 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.882977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.883021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.883033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.883047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.883059 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.986057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.986151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.986201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.986226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:26 crc kubenswrapper[4725]: I1002 11:29:26.986242 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:26Z","lastTransitionTime":"2025-10-02T11:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.058800 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/3.log" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.059762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/2.log" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.062279 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" exitCode=1 Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.062343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.062409 4725 scope.go:117] "RemoveContainer" containerID="698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.063201 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:29:27 crc kubenswrapper[4725]: E1002 11:29:27.063406 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.076442 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.087641 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.088184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.088248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.088261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.088277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.088314 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.098856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.110382 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.126196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.137140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.149144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.171060 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://698e404b23388403e24f299fdfc7cc435011d0504e3f8fcfb15cb7c02a6bd96b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:28:59Z\\\",\\\"message\\\":\\\"e:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1002 11:28:59.599367 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:26Z\\\",\\\"message\\\":\\\"ing success event on pod 
openshift-multus/multus-additional-cni-plugins-8rrpk\\\\nI1002 11:29:26.252926 6817 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1002 11:29:26.252661 6817 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1002 11:29:26.252942 6817 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252952 6817 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252964 6817 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-zxhp4] creating logical port openshift-multus_network-metrics-daemon-zxhp4 for pod on switch crc\\\\nF1002 11:29:26.252976 6817 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102
bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.186053 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 
11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.191265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.191312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.191329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.191351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.191368 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.201490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 
11:29:27.213551 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.227234 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.259694 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0357bd28-47bd-4603-8572-5eed1113d81d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.267377 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:27 crc kubenswrapper[4725]: E1002 11:29:27.267570 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.267700 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:27 crc kubenswrapper[4725]: E1002 11:29:27.267880 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.284093 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36d
c276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.293442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.293476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 
11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.293485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.293502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.293513 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.298998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.312257 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.322405 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.330995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:27Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.395944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.395978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.395988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.396004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc 
kubenswrapper[4725]: I1002 11:29:27.396014 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.499381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.499418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.499430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.499459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.499470 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.602263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.602323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.602339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.602361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.602374 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.704790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.704827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.704837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.704854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.704866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.806619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.806676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.806693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.806713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.806747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.909685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.909772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.909789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.909812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:27 crc kubenswrapper[4725]: I1002 11:29:27.909828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:27Z","lastTransitionTime":"2025-10-02T11:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.012564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.012611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.012623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.012640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.012651 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.066905 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/3.log" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.070708 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:29:28 crc kubenswrapper[4725]: E1002 11:29:28.070902 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.083100 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.096011 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.106307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0357bd28-47bd-4603-8572-5eed1113d81d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.114842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.114882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.114891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.114907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.114917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.119207 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.129509 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.142078 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.153477 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.170071 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.179134 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.190004 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.200412 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.213499 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.217178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.217232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.217245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.217263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.217277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.227596 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.239556 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.250114 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.260686 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.267639 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.267691 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:28 crc kubenswrapper[4725]: E1002 11:29:28.267804 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 11:29:28 crc kubenswrapper[4725]: E1002 11:29:28.267869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.277324 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc4
6643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:26Z\\\",\\\"message\\\":\\\"ing success event on pod openshift-multus/multus-additional-cni-plugins-8rrpk\\\\nI1002 11:29:26.252926 6817 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1002 11:29:26.252661 6817 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1002 11:29:26.252942 6817 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252952 6817 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252964 6817 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-zxhp4] creating logical port openshift-multus_network-metrics-daemon-zxhp4 for pod on switch crc\\\\nF1002 11:29:26.252976 6817 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.287689 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:28Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.319357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.319398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.319406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.319420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.319432 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.421919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.421976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.421991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.422008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.422019 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.524963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.525037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.525053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.525077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.525094 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.628042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.628136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.628149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.628174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.628191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.731929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.732000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.732014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.732047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.732064 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.839862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.839941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.839951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.839967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.839978 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.942328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.942377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.942386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.942400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:28 crc kubenswrapper[4725]: I1002 11:29:28.942409 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:28Z","lastTransitionTime":"2025-10-02T11:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.044313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.044378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.044399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.044427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.044448 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.146766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.146818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.146831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.146848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.146872 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.249250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.249289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.249298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.249311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.249320 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.267819 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.267880 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:29 crc kubenswrapper[4725]: E1002 11:29:29.267957 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:29 crc kubenswrapper[4725]: E1002 11:29:29.268048 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.352153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.352182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.352190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.352202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.352210 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.454571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.454609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.454620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.454634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.454643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.556603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.556644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.556656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.556671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.556683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.658915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.658961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.658972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.658989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.658999 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.761621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.761671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.761682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.761699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.761713 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.865706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.865812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.865832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.865860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.865882 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.968613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.968659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.968671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.968689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:29 crc kubenswrapper[4725]: I1002 11:29:29.968699 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:29Z","lastTransitionTime":"2025-10-02T11:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.070999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.071190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.071214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.071236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.071306 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.173456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.173499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.173508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.173523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.173533 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.267561 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:30 crc kubenswrapper[4725]: E1002 11:29:30.267704 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.267571 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:30 crc kubenswrapper[4725]: E1002 11:29:30.267816 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.276319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.276373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.276387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.276404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.276418 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.379266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.379335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.379359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.379389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.379414 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.482557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.482617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.482629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.482646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.482658 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.586107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.586168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.586181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.586200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.586211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.688557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.688644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.688670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.688702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.688762 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.791625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.791906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.791914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.791927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.791937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.895020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.895069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.895082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.895101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.895119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.998338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.998380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.998390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.998404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:30 crc kubenswrapper[4725]: I1002 11:29:30.998414 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:30Z","lastTransitionTime":"2025-10-02T11:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.101648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.101683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.101694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.101710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.101758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.204653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.204786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.204804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.204824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.204842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.267075 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:31 crc kubenswrapper[4725]: E1002 11:29:31.267267 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.267308 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:31 crc kubenswrapper[4725]: E1002 11:29:31.267532 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.287961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.304972 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.307464 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.307535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.307547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.307566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.307577 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.317678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.332122 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.344059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.359067 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.372045 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.391648 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc4
6643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:26Z\\\",\\\"message\\\":\\\"ing success event on pod openshift-multus/multus-additional-cni-plugins-8rrpk\\\\nI1002 11:29:26.252926 6817 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1002 11:29:26.252661 6817 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1002 11:29:26.252942 6817 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252952 6817 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252964 6817 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-zxhp4] creating logical port openshift-multus_network-metrics-daemon-zxhp4 for pod on switch crc\\\\nF1002 11:29:26.252976 6817 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.408686 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.409346 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.409363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.409372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.409384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.409393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.421833 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.435936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.447810 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.459744 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.468935 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0357bd28-47bd-4603-8572-5eed1113d81d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.479786 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.490864 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.503158 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.512176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.512221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.512233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.512250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.512261 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.513699 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:31Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.614750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.614819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.614835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.614854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.614866 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.717151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.717213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.717227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.717248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.717262 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.819702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.819797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.819813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.819835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.819857 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.922371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.922541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.922556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.922576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:31 crc kubenswrapper[4725]: I1002 11:29:31.922588 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:31Z","lastTransitionTime":"2025-10-02T11:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.025100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.025132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.025140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.025151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.025159 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.073272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.073340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.073354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.073373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.073385 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.092388 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.096121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.096186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.096203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.096232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.096249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.115169 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.119706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.119752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.119783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.119797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.119807 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.133320 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.136581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.136624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.136634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.136652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.136662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.147580 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.151289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.151322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.151330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.151344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.151354 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.162091 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:32Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.162305 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.164061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
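Every status-patch retry above dies on the same TLS failure: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2025-10-02, so the kubelet exhausts its retry count. A minimal sketch to reproduce the handshake failure from the node itself (host and port are taken from the log; everything else is an assumption):

```python
# Sketch: reproduce the webhook TLS failure from the affected node.
# Expect a verification error such as "certificate has expired" while
# the serving cert is stale (a self-signed chain may instead surface
# as "self-signed certificate"); on a healthy node the handshake works.
import socket
import ssl

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS handshake succeeded:", tls.version())
except ssl.SSLCertVerificationError as err:
    print("verification failed:", err.verify_message)
```

Once the certificate is rotated and the clock-versus-validity mismatch is gone, the same probe should complete and the node-status patches above stop bouncing off the webhook.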
event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.164093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.164105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.164121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.164134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.266204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.266253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.266265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.266282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.266293 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.267037 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.267063 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.267182 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:32 crc kubenswrapper[4725]: E1002 11:29:32.267311 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.368252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.368290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.368301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.368317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.368329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.471478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.471531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.471544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.471561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.471573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.574840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.574913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.574937 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.574963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.574981 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.678171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.678378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.678413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.678437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.678455 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.780830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.780881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.780892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.780909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.780923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.883664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.883719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.883759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.883780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.883792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.986280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.986327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.986340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.986358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:32 crc kubenswrapper[4725]: I1002 11:29:32.986371 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:32Z","lastTransitionTime":"2025-10-02T11:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.043414 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.043606 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.043576099 +0000 UTC m=+156.951075562 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.088195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.088235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.088246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.088264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.088276 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.144763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.144809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.144870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.144901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
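The TearDown failure above means the kubelet no longer sees kubevirt.io.hostpath-provisioner among its registered CSI plugins, so the unmount is parked for a retry. As a hedged sketch: CSI drivers normally announce themselves through registration sockets under the kubelet root, and listing that directory shows what has actually registered. Both the /var/lib/kubelet/plugins_registry path (the default kubelet --root-dir layout) and the *.sock naming are assumptions about this deployment:

```python
# Sketch: list the plugin registration sockets the kubelet watches and
# check whether the hostpath provisioner has re-registered yet.
from pathlib import Path

registry = Path("/var/lib/kubelet/plugins_registry")  # assumed default location
sockets = sorted(p.name for p in registry.glob("*.sock")) if registry.is_dir() else []
print(sockets or "no plugin registration sockets found")
print("hostpath provisioner registered:",
      any("kubevirt.io.hostpath-provisioner" in name for name in sockets))
```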
"openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.144970 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.144980 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.144997 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.144997 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145007 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145039 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.145015997 +0000 UTC m=+157.052515480 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145044 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145064 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.145052468 +0000 UTC m=+157.052551941 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145095 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-02 11:30:37.145079479 +0000 UTC m=+157.052578962 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145142 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.145236 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.145216383 +0000 UTC m=+157.052715846 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.191030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.191083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.191103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.191127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.191144 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.268045 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.268085 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.268242 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:33 crc kubenswrapper[4725]: E1002 11:29:33.268367 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.293456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.293498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.293511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.293528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.293541 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.396716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.396799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.396819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.396851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.396869 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.500233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.500294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.500311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.500332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.500350 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.602326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.602369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.602382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.602395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.602404 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.705553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.705604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.705614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.705631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.705643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.808914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.808974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.808994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.809018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.809036 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.911248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.911296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.911305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.911320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:33 crc kubenswrapper[4725]: I1002 11:29:33.911330 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:33Z","lastTransitionTime":"2025-10-02T11:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.014140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.014184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.014195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.014208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.014218 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.116924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.116983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.116996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.117019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.117038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.220580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.220686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.220702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.220747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.220765 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.269823 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:34 crc kubenswrapper[4725]: E1002 11:29:34.270049 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.270381 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:34 crc kubenswrapper[4725]: E1002 11:29:34.270589 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.324360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.324458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.324471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.324495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.324509 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.427558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.427588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.427598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.427611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.427619 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.530155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.530223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.530236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.530259 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.530272 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.632613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.632664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.632674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.632690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.632701 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.735006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.735104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.735121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.735172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.735185 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.838131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.838229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.838248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.838299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.838317 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.941257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.941356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.941396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.941418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:34 crc kubenswrapper[4725]: I1002 11:29:34.941432 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:34Z","lastTransitionTime":"2025-10-02T11:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.045080 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.045125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.045151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.045168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.045180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.148554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.148605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.148616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.148636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.148649 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.250841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.250895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.250908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.250925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.250936 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.267336 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:35 crc kubenswrapper[4725]: E1002 11:29:35.267468 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.267557 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:35 crc kubenswrapper[4725]: E1002 11:29:35.267715 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.353040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.353084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.353095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.353110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.353121 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.455772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.455835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.455844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.455864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.455880 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.558006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.558046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.558053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.558067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.558077 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.660327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.660363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.660372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.660386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.660397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.762913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.762988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.762999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.763017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.763028 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.865651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.865712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.865758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.865783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.865801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.968408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.968927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.969027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.969133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:35 crc kubenswrapper[4725]: I1002 11:29:35.969232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:35Z","lastTransitionTime":"2025-10-02T11:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.073022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.073491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.073632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.073880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.074025 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.181113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.181170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.181180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.181194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.181204 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.266996 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.267014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:36 crc kubenswrapper[4725]: E1002 11:29:36.267156 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:36 crc kubenswrapper[4725]: E1002 11:29:36.267211 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.282901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.282940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.282948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.282963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.282971 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.385820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.385872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.385897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.385920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.385935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.488549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.488592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.488612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.488629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.488640 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.590647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.590696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.590705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.590733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.590744 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.693426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.693461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.693469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.693482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.693491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.795675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.795715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.795743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.795760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.795772 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.897847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.897877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.897885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.897897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.897905 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.999821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.999851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.999859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.999873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:36 crc kubenswrapper[4725]: I1002 11:29:36.999881 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:36Z","lastTransitionTime":"2025-10-02T11:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.102519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.102605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.102618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.102640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.102655 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.205804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.205873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.205891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.205912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.205931 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.267218 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.267394 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:37 crc kubenswrapper[4725]: E1002 11:29:37.267515 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:37 crc kubenswrapper[4725]: E1002 11:29:37.267603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.307832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.307872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.307881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.307901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.307911 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.410202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.410240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.410249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.410265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.410275 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.512052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.512093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.512107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.512121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.512132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.614974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.615014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.615024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.615039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.615050 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.717145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.717183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.717194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.717209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.717218 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.820460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.820531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.820549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.820572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.820590 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.923132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.923164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.923172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.923185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:37 crc kubenswrapper[4725]: I1002 11:29:37.923195 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:37Z","lastTransitionTime":"2025-10-02T11:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.025754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.025795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.025807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.025823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.025832 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.128456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.128503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.128515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.128534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.128546 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.230700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.230810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.230833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.230862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.230883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.267473 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.267508 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:38 crc kubenswrapper[4725]: E1002 11:29:38.267879 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:38 crc kubenswrapper[4725]: E1002 11:29:38.268006 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.333760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.333823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.333841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.333861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.333874 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.436435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.436479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.436490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.436504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.436515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.538684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.538746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.538759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.538778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.538790 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.640623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.640668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.640679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.640694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.640704 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.743279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.743322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.743332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.743351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.743363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.845433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.845484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.845497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.845513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.845522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.948003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.948056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.948079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.948101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:38 crc kubenswrapper[4725]: I1002 11:29:38.948117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:38Z","lastTransitionTime":"2025-10-02T11:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.050329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.050372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.050387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.050404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.050415 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.152801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.152848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.152865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.152882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.152892 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.255028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.255083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.255095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.255112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.255125 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.267315 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.267353 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:39 crc kubenswrapper[4725]: E1002 11:29:39.267504 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:39 crc kubenswrapper[4725]: E1002 11:29:39.267593 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.357641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.357687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.357697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.357712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.357739 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.460171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.460222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.460231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.460245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.460259 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.562862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.562919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.562946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.562971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.562986 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.665435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.665493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.665504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.665519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.665531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.767890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.767945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.767957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.767973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.767985 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.869845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.869883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.869892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.869907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.869918 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.972784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.972833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.972845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.972861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:39 crc kubenswrapper[4725]: I1002 11:29:39.972873 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:39Z","lastTransitionTime":"2025-10-02T11:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.075618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.075666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.075678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.075693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.075705 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.177772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.177814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.177822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.177840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.177850 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.267364 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.267557 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:40 crc kubenswrapper[4725]: E1002 11:29:40.267685 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:40 crc kubenswrapper[4725]: E1002 11:29:40.267745 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.268671 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:29:40 crc kubenswrapper[4725]: E1002 11:29:40.268926 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.279688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.279738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.279749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.279766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.279775 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.382175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.382233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.382249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.382270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.382285 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.484426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.484472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.484484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.484503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.484514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.586813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.586848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.586856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.586869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.586878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.689114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.689154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.689164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.689182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.689193 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.791167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.791208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.791221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.791239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.791251 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.894536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.894830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.894840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.894855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.894866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.997681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.997755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.997770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.997787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:40 crc kubenswrapper[4725]: I1002 11:29:40.997796 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:40Z","lastTransitionTime":"2025-10-02T11:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.101090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.101173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.101196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.101230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.101255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.204460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.204515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.204524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.204538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.204548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.268133 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.268451 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:41 crc kubenswrapper[4725]: E1002 11:29:41.268600 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:41 crc kubenswrapper[4725]: E1002 11:29:41.268860 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.292542 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.307273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.307334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.307349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.307370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.307390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.328559 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc4
6643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:26Z\\\",\\\"message\\\":\\\"ing success event on pod openshift-multus/multus-additional-cni-plugins-8rrpk\\\\nI1002 11:29:26.252926 6817 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1002 11:29:26.252661 6817 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1002 11:29:26.252942 6817 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252952 6817 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252964 6817 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-zxhp4] creating logical port openshift-multus_network-metrics-daemon-zxhp4 for pod on switch crc\\\\nF1002 11:29:26.252976 6817 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.345420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.358167 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.369513 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.386456 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.398609 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0357bd28-47bd-4603-8572-5eed1113d81d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.408771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.409021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.409115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.409298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.409417 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.416802 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.428349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.440351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.451568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.461289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.470332 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.481268 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.494223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.504782 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.513135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.513164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.513171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.513183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.513192 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.516807 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.525418 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:41Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.615999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.616031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.616041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.616055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.616063 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.718960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.719018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.719027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.719045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.719059 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.822237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.822340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.822355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.822378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.822390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.924990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.925063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.925076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.925098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:41 crc kubenswrapper[4725]: I1002 11:29:41.925117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:41Z","lastTransitionTime":"2025-10-02T11:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.028568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.028630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.028646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.028667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.028683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.131540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.131597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.131615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.131644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.131717 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.172338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.172393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.172412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.172435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.172453 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.189207 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.194296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.194350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
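[Independently of the webhook failure, the Ready condition stays False because the kubelet finds no CNI configuration. A quick check of the directory named in the NetworkPluginNotReady message, under the same assumption that it runs directly on the node:

import os

CNI_DIR = "/etc/kubernetes/cni/net.d/"  # path quoted verbatim in the log

try:
    entries = sorted(os.listdir(CNI_DIR))
except FileNotFoundError:
    entries = []

if entries:
    for name in entries:
        print(os.path.join(CNI_DIR, name))
else:
    print(f"{CNI_DIR} is empty or missing -> kubelet stays NotReady")

Until the network provider (here, the multus pods seen in the records above) writes a config file into that directory, every heartbeat re-records the same NodeNotReady event.]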
event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.194367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.194388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.194404 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.207167 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.211591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.211679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
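[The kubelet keeps retrying: the same node-status patch, image list included, is resubmitted and rejected every few tens of milliseconds (11:29:42.189, .207, .228). A sketch for quantifying the loop from a saved copy of this journal, assuming the excerpt was exported with `journalctl -u kubelet > kubelet.log` (the filename is illustrative):

from collections import Counter

# Failure signatures that recur throughout this section of the journal.
SIGNATURES = (
    "failed calling webhook",
    "no CNI configuration file",
    "Error updating node status, will retry",
)

counts = Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for sig in SIGNATURES:
            if sig in line:
                counts[sig] += 1

for sig, n in counts.most_common():
    print(f"{n:6d}  {sig}")

A high, still-growing count for the webhook signature alongside a constant expiry timestamp distinguishes this failure (a certificate that will never become valid again) from a transient webhook outage.]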
event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.211716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.211789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.211808 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.228800 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.232747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.232794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.232811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.232832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.232846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.247116 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.252102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.252433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.252611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.252808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.252993 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.268147 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.268370 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.269560 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.268835 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.272639 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:42Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:42 crc kubenswrapper[4725]: E1002 11:29:42.273400 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.285787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.285861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.285874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.285898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.285917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.388632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.388707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.388771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.388801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.388818 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.491581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.491993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.492171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.492319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.492469 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.595612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.595677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.595690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.595716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.595753 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.698665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.698761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.698782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.698806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.698823 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.801831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.801900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.801925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.801954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.801976 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.905546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.905632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.905656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.905690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:42 crc kubenswrapper[4725]: I1002 11:29:42.905716 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:42Z","lastTransitionTime":"2025-10-02T11:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.009080 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.009153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.009171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.009197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.009214 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:43Z","lastTransitionTime":"2025-10-02T11:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.112070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.112250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.112271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.112294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.112312 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:43Z","lastTransitionTime":"2025-10-02T11:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.216129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.216184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.216199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.216223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.216237 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:43Z","lastTransitionTime":"2025-10-02T11:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.267756 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:43 crc kubenswrapper[4725]: E1002 11:29:43.267888 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.267970 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:43 crc kubenswrapper[4725]: E1002 11:29:43.268187 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.319584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.319647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.319667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.319693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.319713 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:43Z","lastTransitionTime":"2025-10-02T11:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.422824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.422890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.422910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.422935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:43 crc kubenswrapper[4725]: I1002 11:29:43.422954 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:43Z","lastTransitionTime":"2025-10-02T11:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.268051 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.268098 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:44 crc kubenswrapper[4725]: E1002 11:29:44.268215 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:44 crc kubenswrapper[4725]: E1002 11:29:44.268490 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.352381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.352444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.352460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.352482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.352500 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:44Z","lastTransitionTime":"2025-10-02T11:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.455275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.455347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.455370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.455398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:44 crc kubenswrapper[4725]: I1002 11:29:44.455419 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:44Z","lastTransitionTime":"2025-10-02T11:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.267984 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:45 crc kubenswrapper[4725]: E1002 11:29:45.268188 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044"
Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.268306 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 11:29:45 crc kubenswrapper[4725]: E1002 11:29:45.268563 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.281220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.281261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.281270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.281285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.281295 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:45Z","lastTransitionTime":"2025-10-02T11:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.384610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.384680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.384701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.384757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:45 crc kubenswrapper[4725]: I1002 11:29:45.384775 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:45Z","lastTransitionTime":"2025-10-02T11:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.267688 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.267829 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 11:29:46 crc kubenswrapper[4725]: E1002 11:29:46.267961 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:46 crc kubenswrapper[4725]: E1002 11:29:46.268121 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.309650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.309681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.309691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.309706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.309745 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:46Z","lastTransitionTime":"2025-10-02T11:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.413027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.413092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.413119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.413149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:46 crc kubenswrapper[4725]: I1002 11:29:46.413172 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:46Z","lastTransitionTime":"2025-10-02T11:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.267463 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.267499 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 11:29:47 crc kubenswrapper[4725]: E1002 11:29:47.267581 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044"
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:47 crc kubenswrapper[4725]: E1002 11:29:47.267686 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.340469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.340509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.340523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.340537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.340546 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:47Z","lastTransitionTime":"2025-10-02T11:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.442884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.442946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.442963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.442989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:47 crc kubenswrapper[4725]: I1002 11:29:47.443007 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:47Z","lastTransitionTime":"2025-10-02T11:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.112307 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:48 crc kubenswrapper[4725]: E1002 11:29:48.112486 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 02 11:29:48 crc kubenswrapper[4725]: E1002 11:29:48.112569 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs podName:a6af8c70-d2e8-4891-bf65-1deb3fb02044 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.112551626 +0000 UTC m=+172.020051089 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs") pod "network-metrics-daemon-zxhp4" (UID: "a6af8c70-d2e8-4891-bf65-1deb3fb02044") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.268252 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:48 crc kubenswrapper[4725]: E1002 11:29:48.268535 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.268905 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:48 crc kubenswrapper[4725]: E1002 11:29:48.269121 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.269390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.269466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.269505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.269534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.269553 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.372373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.372403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.372411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.372424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.372433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.476216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.476376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.476397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.476426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.476449 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.578929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.578980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.578997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.579021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.579038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.682316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.682379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.682397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.682422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.682439 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.785761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.785826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.785839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.785862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.785877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.888264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.888316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.888332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.888351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.888363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.990826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.990901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.990919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.990946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:48 crc kubenswrapper[4725]: I1002 11:29:48.990965 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:48Z","lastTransitionTime":"2025-10-02T11:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.093597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.093659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.093676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.093701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.093746 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.196645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.196716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.196778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.196808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.196828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.268022 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:49 crc kubenswrapper[4725]: E1002 11:29:49.268192 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.268309 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:49 crc kubenswrapper[4725]: E1002 11:29:49.268512 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.299365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.299434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.299459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.299488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.299508 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.402794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.402862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.402880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.402905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.402923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.506691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.506795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.506813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.506837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.506856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.609347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.609394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.609406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.609423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.609434 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.712643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.712705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.712759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.712793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.712816 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.815764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.815821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.815834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.815861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.815877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.918498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.918605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.918632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.918667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:49 crc kubenswrapper[4725]: I1002 11:29:49.918690 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:49Z","lastTransitionTime":"2025-10-02T11:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.021206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.021265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.021284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.021309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.021327 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.123267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.123312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.123322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.123335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.123346 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.225614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.225665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.225677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.225693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.225706 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.267762 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:50 crc kubenswrapper[4725]: E1002 11:29:50.267903 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.267761 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:50 crc kubenswrapper[4725]: E1002 11:29:50.268145 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.328821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.328885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.328903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.328929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.328945 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.431978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.432019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.432027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.432041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.432051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.534756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.534810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.534826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.534848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.534865 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.637297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.637352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.637361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.637376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.637386 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.739993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.740064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.740075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.740090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.740099 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.845521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.845560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.845569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.845583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.845592 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.948284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.948340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.948363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.948391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:50 crc kubenswrapper[4725]: I1002 11:29:50.948413 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:50Z","lastTransitionTime":"2025-10-02T11:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.052024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.052116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.052142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.052168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.052190 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.155550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.155628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.155644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.155696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.155714 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.259203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.259267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.259295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.259545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.259563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.267965 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.268039 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:51 crc kubenswrapper[4725]: E1002 11:29:51.268106 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:51 crc kubenswrapper[4725]: E1002 11:29:51.268239 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.293131 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bdc97c5dc7c8ef3e492ce5e985b45d53589b535a64e1467bc190e638594d25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5624a06fa5b0517a4b81ff5ca3b982e0fe5e4fd7ab3a2ab2ce2f67fd987ce6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.310810 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:26Z\\\",\\\"message\\\":\\\"ing success event on pod openshift-multus/multus-additional-cni-plugins-8rrpk\\\\nI1002 11:29:26.252926 6817 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1002 11:29:26.252661 6817 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1002 11:29:26.252942 6817 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252952 6817 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI1002 11:29:26.252964 6817 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-zxhp4] creating logical port openshift-multus_network-metrics-daemon-zxhp4 for pod on switch crc\\\\nF1002 11:29:26.252976 6817 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:29:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c2hv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.326294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03ae1553-963d-477c-93af-3c54f1b2b261\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bcd37ffcafd3d50c3e71e1c3389e29648f16a5f879518abe539244836d4d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60ecfb9a4d7954225eaae212170ef1c9fbececedfe241e1878624f159bc7e82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfs9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2np6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 
11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.340597 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.359021 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e8bd543dee83a7735f06325734543d70ae5e323362573e562f34318eab04cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.361679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.361770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.361794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.361824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.361848 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.376916 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dafbaad0345709025d6c342fbd095cb8a8c2c007370da1ef0cb61e7a42ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.397030 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9bad7c-78f8-435d-8449-7c5b04a16869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdb1a74a86f0f272191a0b3df31275af3859296c95d18fa890e796ccc2a2c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmmw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lv8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.422260 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2q2jl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15fc62f2-0a7e-477c-8e35-0888c40e2d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T11:29:19Z\\\",\\\"message\\\":\\\"2025-10-02T11:28:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f\\\\n2025-10-02T11:28:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_719242da-bed8-47de-9a07-49ec7dd5136f to /host/opt/cni/bin/\\\\n2025-10-02T11:28:33Z [verbose] multus-daemon started\\\\n2025-10-02T11:28:33Z [verbose] Readiness Indicator file check\\\\n2025-10-02T11:29:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2q2jl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.436466 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0357bd28-47bd-4603-8572-5eed1113d81d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2ef69dfbc80f03ea89df8bfc959db06fb614b8759b135d11b688bddc446da13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf433c1dfbd836e75967cbf01e5ace52948f79cc67685547771fcc2bfa6207d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.450137 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1348ec15-b67e-421d-9f2d-f007f24ea2ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2be502859148ebf2ae33e4d150ce71e0dbf25c03ec8f769a98a36b4e9197c1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0105a7ae9cd31cccf6ec90e46c9db38c18c4b25a0f3f45c2447d7a07f9986cd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823aff5ee371658bf2adda69948c3768b0ca6f4658b26492a7c6a03e5a022a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261c48dbc43d4d259e7f2d1b4d12c487e81e5d64661896bf92abacdc04924674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb24e45c2907bf226a254de89fc3bba0849c6faec306a2f20a13834e74d32a7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1002 11:28:23.188580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 11:28:23.189464 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1803318153/tls.crt::/tmp/serving-cert-1803318153/tls.key\\\\\\\"\\\\nI1002 11:28:29.166876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 11:28:29.173989 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 11:28:29.174017 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 11:28:29.174045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 11:28:29.174053 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 11:28:29.179316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 11:28:29.179382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179409 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 11:28:29.179431 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 11:28:29.179451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1002 11:28:29.179471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 11:28:29.179491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1002 11:28:29.179614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1002 11:28:29.180743 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ac4109ae75d9908eb5f4269ef6dae5fca34bcd8af44db256ef5e6f5a75a40a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ace6b615bcb9737eded5fc1efd169d44f02081f4e3eda8c50fb328199988f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.459735 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97d459df-3c16-4858-b52a-7cd9d866e5a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ef59ae3c2937d1b8ea1253bcee19d4c5088aa0f2219738802389276814effef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0c6536b3dceb6d94853a1ed4a53c7fa8c7223ad92b88aca84d19a15dd948b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d5cb966c9878b4760e5efcd235f72571dc54b108b555b1943c08fbbaf4fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207361f8078f8972ea41bd47515264f60ebd3e668b095bb64e5234057d237bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.463537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.463564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.463572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.463584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.463593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.469709 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zs4dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05ab4d5f-f28b-40a8-af40-baa85450dec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67632d6ba2e4cd987221a4f876555f149e82e2499740a0a8f560f272f6858536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrnkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zs4dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.481677 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a370297b-12f2-48c3-9097-f8727e57baa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac73ac04ec57ebbee560a16fb5323214c200812a08cbfa2829a061e552393a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6a06eeaf72da9e898095ed62d1f5e50c39c7d9c6e315909b9e96ff5e237fd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b52e8eb4e94eb8fe466e84684862f9890787c8f20bfefb12eafdb585f733af79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965c4cfcde1d4fbe9ff9776df9a8dc020425264e880fcb9b16a2095da26b6490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be3277ddaba2efc3469ace120f2ebe62e8656ae7987be5f52f0630ecc159423\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7a35af6c95f0bf07ba32e3afc5f35f915a8ab7b0e291eab820b7e3de49c2130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7451ebae20a3a434a35d9b0c7d98f3d64450ad3a5f3904604a5f557ee62d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T11:28:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T11:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4rkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8rrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.489850 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7n6ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"045b0cc5-fa2d-4dbe-89eb-80e841e6c947\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce884b2c383006c2ebeb6084dc4a8ce8e5e9d7f30623988eb12e3d0815a908b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7n6ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.501012 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6af8c70-d2e8-4891-bf65-1deb3fb02044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6xjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zxhp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.513903 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b02b6025-ee01-42fe-8659-fa6c3f7f1717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d43ea4a39a19431e75e43da61ebb9c1d11fde8a7541ce7360bb599902639fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e3a2a4a5a7a513c759cf7a45ba89e718198014def75331d1d922f1c5eecab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f43f019f9f03304f55b165e734a5ffccbca0781728be174c5e0a763f717a8f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cacfffaf22ac753874a183aad0f2724c341ac436e7492131e8b3bd4392239a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T11:28:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.526862 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.537285 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T11:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:51Z is after 2025-08-24T17:21:41Z"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.565869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.565905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.565914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.565930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.565941 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.668659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.668709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.668740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.668756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.668766 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.771489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.771569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.771593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.771617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.771636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.874429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.874897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.875108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.875272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.875424 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.978575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.978620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.978636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.978656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:51 crc kubenswrapper[4725]: I1002 11:29:51.978671 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:51Z","lastTransitionTime":"2025-10-02T11:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.082311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.082399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.082424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.082468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.082490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.186194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.186248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.186264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.186299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.186318 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.267358 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.267522 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.267561 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.267782 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.289518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.289585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.289600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.289624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.289639 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.297888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.297923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.297934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.297954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.297969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.311442 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:52Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.315001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.315043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.315057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.315077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.315092 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.330478 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:52Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.336108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.336163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.336181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.336206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.336267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.356038 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:52Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.360477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.360715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.360782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.360808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.360827 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.381716 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:52Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.386997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.387043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
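
Every status-patch attempt above fails the same way: the patch is rejected because the call to the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, well before the current time in the log. A minimal sketch for confirming that from the node itself, assuming Python with the third-party cryptography package (>= 42 for the *_utc accessors) is available; this is an illustrative check, not an official OpenShift tool:

#!/usr/bin/env python3
"""Read the expiry of the certificate served on 127.0.0.1:9743."""
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumed installed; not part of the stdlib

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the Post URL in the log

# get_server_certificate() does not verify the chain, so unlike the kubelet's
# verifying client it can still fetch a certificate that has already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("EXPIRED" if cert.not_valid_after_utc < now else "valid",
      "as of", now.strftime("%Y-%m-%dT%H:%M:%SZ"))

Against this log the sketch would report notAfter 2025-08-24T17:21:41Z, matching the x509 error text the kubelet keeps printing.
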
event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.387053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.387087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.387097 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.403871 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T11:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd4a54b9-5b28-4a58-841d-d4e30c2ffaa7\\\",\\\"systemUUID\\\":\\\"40cbc71f-67e8-47ed-8b97-d7af0f87b7bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T11:29:52Z is after 2025-08-24T17:21:41Z" Oct 02 11:29:52 crc kubenswrapper[4725]: E1002 11:29:52.404003 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.405346 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
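
After the final retry the kubelet gives up ("update node status exceeds retry count") and immediately begins the next heartbeat cycle, which is why the same five "Recording event message" / "Node became not ready" entries repeat roughly every 100 ms below. A rough way to quantify the loop from a saved excerpt, assuming the journal was dumped with something like journalctl -u kubelet > kubelet.log (one entry per line; the file name and the substrings, copied from the entries above, are assumptions):

#!/usr/bin/env python3
"""Count the recurring failure messages in a saved kubelet journal excerpt."""
import sys
from collections import Counter

# Substrings taken from the log entries shown above.
PATTERNS = {
    "status patch rejected (webhook cert)": "Error updating node status, will retry",
    "status update abandoned":              "update node status exceeds retry count",
    "node marked NotReady":                 '"Node became not ready"',
    "pod sync skipped (network not ready)": "Error syncing pod, skipping",
}

counts = Counter()
path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
with open(path, encoding="utf-8") as fh:
    for line in fh:
        for label, needle in PATTERNS.items():
            if needle in line:
                counts[label] += 1

for label, n in counts.most_common():
    print(f"{n:6d}  {label}")
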
event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.405392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.405400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.405413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.405423 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.507614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.507653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.507665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.507684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.507699 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.610481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.610547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.610565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.610589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.610606 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.713099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.713157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.713169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.713186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.713195 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.815957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.816007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.816019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.816035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.816047 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.918117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.918161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.918173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.918191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:52 crc kubenswrapper[4725]: I1002 11:29:52.918204 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:52Z","lastTransitionTime":"2025-10-02T11:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.020916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.020974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.020990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.021013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.021030 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.123690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.123808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.123840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.123880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.123910 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.227086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.227166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.227189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.227219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.227242 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.269044 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4"
Oct 02 11:29:53 crc kubenswrapper[4725]: E1002 11:29:53.269218 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044"
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.269868 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 11:29:53 crc kubenswrapper[4725]: E1002 11:29:53.270020 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.270287 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"
Oct 02 11:29:53 crc kubenswrapper[4725]: E1002 11:29:53.270474 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.329680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.329795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.329814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.329840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.329858 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
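
These entries show the cause of the NotReady condition: ovnkube-controller is stuck in CrashLoopBackOff, so OVN-Kubernetes never writes its CNI configuration, pod sandboxes for the waiting pods cannot be created, and the kubelet keeps reporting NetworkPluginNotReady. A sketch of the corresponding on-node check (stdlib Python, run as root on the node; the extension filter is an assumption about the usual CNI config file names):

#!/usr/bin/env python3
"""Check whether any CNI network config exists where the kubelet looks."""
from pathlib import Path

NET_D = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

if not NET_D.is_dir():
    print(f"{NET_D} does not exist")
else:
    configs = sorted(p.name for p in NET_D.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})
    if configs:
        print("CNI config present:", ", ".join(configs))
    else:
        print(f"{NET_D} is empty -- matches the 'no CNI configuration file' error")

On this node the directory would stay empty until ovnkube-controller starts successfully, which in turn is blocked by the same expired-certificate problem seen in the webhook errors above.
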
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.432853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.432892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.432901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.432914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.432923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.535308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.535358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.535371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.535389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.535405 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.637829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.637890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.637907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.637930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.637949 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.740895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.740946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.740958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.740976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.740988 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.843059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.843100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.843112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.843128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.843140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.945658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.945761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.945800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.945853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:53 crc kubenswrapper[4725]: I1002 11:29:53.945879 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:53Z","lastTransitionTime":"2025-10-02T11:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.049163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.049201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.049209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.049222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.049231 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.151315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.151390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.151415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.151445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.151472 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.254387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.254432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.254442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.254458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.254468 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.267076 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.267088 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:54 crc kubenswrapper[4725]: E1002 11:29:54.267230 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:54 crc kubenswrapper[4725]: E1002 11:29:54.267337 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.357124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.357187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.357205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.357231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.357249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.459811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.459878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.459889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.459911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.459924 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.562284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.562316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.562323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.562336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.562344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.665204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.665255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.665267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.665291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.665307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.767892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.767967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.767990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.768022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.768044 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.871252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.871321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.871344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.871372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.871393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.973834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.973903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.973927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.973955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:54 crc kubenswrapper[4725]: I1002 11:29:54.973975 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:54Z","lastTransitionTime":"2025-10-02T11:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.077052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.077112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.077129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.077152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.077206 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.179512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.179543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.179550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.179562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.179571 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.267020 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:55 crc kubenswrapper[4725]: E1002 11:29:55.267201 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.267998 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:55 crc kubenswrapper[4725]: E1002 11:29:55.268345 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.282414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.282447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.282485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.282501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.282513 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.283693 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.385994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.386404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.386605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.386837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.386981 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.489157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.489181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.489189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.489203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.489211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.591855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.591936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.591978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.592011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.592033 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.695132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.695181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.695190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.695209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.695221 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.798063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.798106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.798118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.798135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.798148 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.900855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.900889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.900899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.900913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:55 crc kubenswrapper[4725]: I1002 11:29:55.900923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:55Z","lastTransitionTime":"2025-10-02T11:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.003528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.003844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.003857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.003873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.003882 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.106652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.106750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.106767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.106789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.106802 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.209778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.209825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.209837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.209856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.209868 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.268027 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:56 crc kubenswrapper[4725]: E1002 11:29:56.268423 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.268027 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:56 crc kubenswrapper[4725]: E1002 11:29:56.268617 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.312948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.312985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.312996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.313012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.313023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.415166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.415471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.415710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.416174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.416329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.518839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.518877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.518887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.518905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.518916 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.621661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.621716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.621765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.621784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.621799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.725378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.725431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.725443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.725462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.725474 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.828378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.828466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.828483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.828503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.828518 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.931125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.931195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.931208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.931234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:56 crc kubenswrapper[4725]: I1002 11:29:56.931253 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:56Z","lastTransitionTime":"2025-10-02T11:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.034633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.034681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.034690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.034705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.034714 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.138214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.138260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.138269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.138283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.138293 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.240633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.240699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.240715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.240775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.240793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.267982 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.268108 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:57 crc kubenswrapper[4725]: E1002 11:29:57.268195 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:57 crc kubenswrapper[4725]: E1002 11:29:57.268337 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.343996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.344065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.344090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.344120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.344143 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.446890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.446944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.446954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.446975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.446987 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.550289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.550761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.550869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.550967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.551094 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.654206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.654264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.654280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.654303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.654319 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.757250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.757324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.757350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.757377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.757397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.860633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.860757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.860778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.860803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.860822 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.963663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.963790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.963822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.963854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:57 crc kubenswrapper[4725]: I1002 11:29:57.963874 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:57Z","lastTransitionTime":"2025-10-02T11:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.066840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.067131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.067224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.067312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.067393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.169153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.169457 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.169559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.169666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.169790 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.267382 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:29:58 crc kubenswrapper[4725]: E1002 11:29:58.267513 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.267747 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:29:58 crc kubenswrapper[4725]: E1002 11:29:58.267881 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.272924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.272985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.273007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.273033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.273052 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.376851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.376927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.376950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.376980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.377002 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.480412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.480483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.480503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.480532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.480554 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.583335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.583824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.584011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.584229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.584410 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.687328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.687379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.687396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.687417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.687434 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.789795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.789840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.789849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.789862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.789871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.892646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.892702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.892713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.892779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.892791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.994867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.994957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.994982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.995049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:58 crc kubenswrapper[4725]: I1002 11:29:58.995068 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:58Z","lastTransitionTime":"2025-10-02T11:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.098227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.098301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.098325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.098356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.098378 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.201034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.201104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.201128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.201157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.201180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.267862 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.267908 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:29:59 crc kubenswrapper[4725]: E1002 11:29:59.268317 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:29:59 crc kubenswrapper[4725]: E1002 11:29:59.268401 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.303646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.303716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.303761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.303785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.303799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.406688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.406822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.406845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.406870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.406888 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.511518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.511570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.511585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.511605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.511618 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.614121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.614157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.614168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.614186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.614200 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.717127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.717183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.717205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.717255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.717273 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.820012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.820097 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.820129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.820158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.820180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.923133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.923169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.923178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.923189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:29:59 crc kubenswrapper[4725]: I1002 11:29:59.923198 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:29:59Z","lastTransitionTime":"2025-10-02T11:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.024963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.025391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.025431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.025462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.025485 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.128821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.128906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.128924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.128943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.128968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.230884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.230938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.230955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.230976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.230992 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.267761 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.267811 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:00 crc kubenswrapper[4725]: E1002 11:30:00.267938 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:00 crc kubenswrapper[4725]: E1002 11:30:00.268029 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.334801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.334881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.334905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.334930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.334951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.437528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.437573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.437584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.437598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.437609 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.540276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.540319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.540329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.540345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.540355 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.642542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.642580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.642592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.642608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.642618 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.746075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.746167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.746191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.746225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.746248 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.848307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.848348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.848367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.848381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.848390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.951388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.951449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.951459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.951475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:00 crc kubenswrapper[4725]: I1002 11:30:00.951488 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:00Z","lastTransitionTime":"2025-10-02T11:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.053784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.053868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.053902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.053933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.053955 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:01Z","lastTransitionTime":"2025-10-02T11:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.157071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.157152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.157172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.157191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.157203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T11:30:01Z","lastTransitionTime":"2025-10-02T11:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 11:30:01 crc kubenswrapper[4725]: E1002 11:30:01.257420 4725 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.267273 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.267292 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:01 crc kubenswrapper[4725]: E1002 11:30:01.267516 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:01 crc kubenswrapper[4725]: E1002 11:30:01.267622 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.285285 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2q2jl" podStartSLOduration=92.285270711 podStartE2EDuration="1m32.285270711s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.284708115 +0000 UTC m=+121.192207578" watchObservedRunningTime="2025-10-02 11:30:01.285270711 +0000 UTC m=+121.192770174" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.294910 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.294890176 podStartE2EDuration="35.294890176s" podCreationTimestamp="2025-10-02 11:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.29391568 +0000 UTC m=+121.201415143" watchObservedRunningTime="2025-10-02 11:30:01.294890176 +0000 UTC m=+121.202389639" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.309643 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.309623523 podStartE2EDuration="1m32.309623523s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.309620763 +0000 UTC m=+121.217120226" watchObservedRunningTime="2025-10-02 11:30:01.309623523 +0000 UTC m=+121.217122996" Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.350356 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=71.350336326 podStartE2EDuration="1m11.350336326s" podCreationTimestamp="2025-10-02 11:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.350040318 +0000 UTC m=+121.257539791" watchObservedRunningTime="2025-10-02 11:30:01.350336326 +0000 UTC m=+121.257835789" Oct 02 11:30:01 crc kubenswrapper[4725]: E1002 11:30:01.410149 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.431989 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podStartSLOduration=92.431973237 podStartE2EDuration="1m32.431973237s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.421958461 +0000 UTC m=+121.329457924" watchObservedRunningTime="2025-10-02 11:30:01.431973237 +0000 UTC m=+121.339472700"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.449704 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zs4dp" podStartSLOduration=92.449683426 podStartE2EDuration="1m32.449683426s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.432508833 +0000 UTC m=+121.340008296" watchObservedRunningTime="2025-10-02 11:30:01.449683426 +0000 UTC m=+121.357182899"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.465173 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.465153713 podStartE2EDuration="1m32.465153713s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.44947175 +0000 UTC m=+121.356971213" watchObservedRunningTime="2025-10-02 11:30:01.465153713 +0000 UTC m=+121.372653176"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.504490 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8rrpk" podStartSLOduration=91.504473897 podStartE2EDuration="1m31.504473897s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.503753228 +0000 UTC m=+121.411252701" watchObservedRunningTime="2025-10-02 11:30:01.504473897 +0000 UTC m=+121.411973360"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.530377 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7n6ff" podStartSLOduration=91.530358482 podStartE2EDuration="1m31.530358482s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.518581196 +0000 UTC m=+121.426080659" watchObservedRunningTime="2025-10-02 11:30:01.530358482 +0000 UTC m=+121.437857945"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.574218 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2np6s" podStartSLOduration=90.57419994 podStartE2EDuration="1m30.57419994s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.573690107 +0000 UTC m=+121.481189590" watchObservedRunningTime="2025-10-02 11:30:01.57419994 +0000 UTC m=+121.481699403"
Oct 02 11:30:01 crc kubenswrapper[4725]: I1002 11:30:01.601260 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.601245276 podStartE2EDuration="6.601245276s" podCreationTimestamp="2025-10-02 11:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:01.600490066 +0000 UTC m=+121.507989529" watchObservedRunningTime="2025-10-02 11:30:01.601245276 +0000 UTC m=+121.508744729"
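In every pod_startup_latency_tracker entry above, podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, and the m=+… suffix is the monotonic offset in seconds since this kubelet process started. The arithmetic checks out directly; e.g. for multus-2q2jl, with the timestamps copied from its entry and truncated to microseconds (Python's resolution, hence the dropped trailing nanoseconds):

    from datetime import datetime, timezone

    # values from the multus-2q2jl entry above
    created        = datetime(2025, 10, 2, 11, 28, 29, tzinfo=timezone.utc)          # podCreationTimestamp
    watch_observed = datetime(2025, 10, 2, 11, 30, 1, 285270, tzinfo=timezone.utc)   # watchObservedRunningTime

    print((watch_observed - created).total_seconds())
    # -> 92.28527, matching podStartSLOduration=92.285270711 up to the truncated nanoseconds

The same holds for the others, e.g. kube-rbac-proxy-crio-crc: 11:30:01.294890176 minus 11:29:26 is exactly 35.294890176 s, and etcd-crc: 11:30:01.601245276 minus 11:29:55 is 6.601245276 s.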
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.788310 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.788995 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.789911 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.790110 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.879572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/085eb937-0d12-4b9d-8d0b-345e6d907f06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.879677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/085eb937-0d12-4b9d-8d0b-345e6d907f06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.879716 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.879811 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/085eb937-0d12-4b9d-8d0b-345e6d907f06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.879844 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc 
kubenswrapper[4725]: I1002 11:30:02.981461 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/085eb937-0d12-4b9d-8d0b-345e6d907f06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981503 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/085eb937-0d12-4b9d-8d0b-345e6d907f06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981556 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/085eb937-0d12-4b9d-8d0b-345e6d907f06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981520 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.981677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/085eb937-0d12-4b9d-8d0b-345e6d907f06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.984083 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/085eb937-0d12-4b9d-8d0b-345e6d907f06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:02 crc kubenswrapper[4725]: I1002 11:30:02.989428 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/085eb937-0d12-4b9d-8d0b-345e6d907f06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:03 crc kubenswrapper[4725]: I1002 11:30:03.002581 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/085eb937-0d12-4b9d-8d0b-345e6d907f06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmkpv\" (UID: \"085eb937-0d12-4b9d-8d0b-345e6d907f06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:03 crc kubenswrapper[4725]: I1002 11:30:03.104600 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" Oct 02 11:30:03 crc kubenswrapper[4725]: W1002 11:30:03.127169 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod085eb937_0d12_4b9d_8d0b_345e6d907f06.slice/crio-6c95bd7858d26eba22cf1b551dd0c0b23f8c3f554e9469e87e4f46e7554d4575 WatchSource:0}: Error finding container 6c95bd7858d26eba22cf1b551dd0c0b23f8c3f554e9469e87e4f46e7554d4575: Status 404 returned error can't find the container with id 6c95bd7858d26eba22cf1b551dd0c0b23f8c3f554e9469e87e4f46e7554d4575 Oct 02 11:30:03 crc kubenswrapper[4725]: I1002 11:30:03.181315 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" event={"ID":"085eb937-0d12-4b9d-8d0b-345e6d907f06","Type":"ContainerStarted","Data":"6c95bd7858d26eba22cf1b551dd0c0b23f8c3f554e9469e87e4f46e7554d4575"} Oct 02 11:30:03 crc kubenswrapper[4725]: I1002 11:30:03.267388 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:03 crc kubenswrapper[4725]: E1002 11:30:03.267520 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:03 crc kubenswrapper[4725]: I1002 11:30:03.267388 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:03 crc kubenswrapper[4725]: E1002 11:30:03.267706 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:04 crc kubenswrapper[4725]: I1002 11:30:04.185967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" event={"ID":"085eb937-0d12-4b9d-8d0b-345e6d907f06","Type":"ContainerStarted","Data":"a2ceb4a80a9a980b4c0d077e3ffc94cd21af4b360e8fc0e057f5b55b69bcda19"} Oct 02 11:30:04 crc kubenswrapper[4725]: I1002 11:30:04.198963 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmkpv" podStartSLOduration=95.198940117 podStartE2EDuration="1m35.198940117s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:04.197339392 +0000 UTC m=+124.104838855" watchObservedRunningTime="2025-10-02 11:30:04.198940117 +0000 UTC m=+124.106439590" Oct 02 11:30:04 crc kubenswrapper[4725]: I1002 11:30:04.267142 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:04 crc kubenswrapper[4725]: I1002 11:30:04.267139 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:04 crc kubenswrapper[4725]: E1002 11:30:04.267284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:04 crc kubenswrapper[4725]: E1002 11:30:04.267428 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.189510 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/1.log" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.190082 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/0.log" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.190118 4725 generic.go:334] "Generic (PLEG): container finished" podID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" containerID="81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e" exitCode=1 Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.190262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerDied","Data":"81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e"} Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.190311 4725 scope.go:117] "RemoveContainer" containerID="3b51d1d6e35f9009884d0d8b96f9c017edf1fe2198966b1b937c12e18816c78a" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.190731 4725 scope.go:117] "RemoveContainer" containerID="81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e" Oct 02 11:30:05 crc kubenswrapper[4725]: E1002 11:30:05.191595 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2q2jl_openshift-multus(15fc62f2-0a7e-477c-8e35-0888c40e2d6c)\"" pod="openshift-multus/multus-2q2jl" podUID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.267830 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.268054 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:05 crc kubenswrapper[4725]: E1002 11:30:05.268665 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:05 crc kubenswrapper[4725]: E1002 11:30:05.268934 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:05 crc kubenswrapper[4725]: I1002 11:30:05.270584 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:30:05 crc kubenswrapper[4725]: E1002 11:30:05.270857 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c2hv_openshift-ovn-kubernetes(d6cd2823-e7fc-454e-9ec2-e3dcc81472e2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" Oct 02 11:30:06 crc kubenswrapper[4725]: I1002 11:30:06.196123 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/1.log" Oct 02 11:30:06 crc kubenswrapper[4725]: I1002 11:30:06.267890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:06 crc kubenswrapper[4725]: I1002 11:30:06.267925 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:06 crc kubenswrapper[4725]: E1002 11:30:06.268055 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:06 crc kubenswrapper[4725]: E1002 11:30:06.268167 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:06 crc kubenswrapper[4725]: E1002 11:30:06.413488 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 11:30:07 crc kubenswrapper[4725]: I1002 11:30:07.267257 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:07 crc kubenswrapper[4725]: I1002 11:30:07.267287 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:07 crc kubenswrapper[4725]: E1002 11:30:07.267478 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:07 crc kubenswrapper[4725]: E1002 11:30:07.267559 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:08 crc kubenswrapper[4725]: I1002 11:30:08.267600 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:08 crc kubenswrapper[4725]: I1002 11:30:08.267601 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:08 crc kubenswrapper[4725]: E1002 11:30:08.267859 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:08 crc kubenswrapper[4725]: E1002 11:30:08.267931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:09 crc kubenswrapper[4725]: I1002 11:30:09.267350 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:09 crc kubenswrapper[4725]: I1002 11:30:09.267362 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:09 crc kubenswrapper[4725]: E1002 11:30:09.267584 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:09 crc kubenswrapper[4725]: E1002 11:30:09.267955 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:10 crc kubenswrapper[4725]: I1002 11:30:10.267941 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:10 crc kubenswrapper[4725]: I1002 11:30:10.268049 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:10 crc kubenswrapper[4725]: E1002 11:30:10.268273 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:10 crc kubenswrapper[4725]: E1002 11:30:10.268467 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:11 crc kubenswrapper[4725]: I1002 11:30:11.268088 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:11 crc kubenswrapper[4725]: I1002 11:30:11.268104 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:11 crc kubenswrapper[4725]: E1002 11:30:11.271230 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:11 crc kubenswrapper[4725]: E1002 11:30:11.271458 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:11 crc kubenswrapper[4725]: E1002 11:30:11.414058 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 11:30:12 crc kubenswrapper[4725]: I1002 11:30:12.267827 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:12 crc kubenswrapper[4725]: E1002 11:30:12.268418 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:12 crc kubenswrapper[4725]: I1002 11:30:12.267868 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:12 crc kubenswrapper[4725]: E1002 11:30:12.268668 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:13 crc kubenswrapper[4725]: I1002 11:30:13.267344 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:13 crc kubenswrapper[4725]: I1002 11:30:13.267389 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:13 crc kubenswrapper[4725]: E1002 11:30:13.267628 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:13 crc kubenswrapper[4725]: E1002 11:30:13.267849 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:14 crc kubenswrapper[4725]: I1002 11:30:14.267168 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:14 crc kubenswrapper[4725]: I1002 11:30:14.267266 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:14 crc kubenswrapper[4725]: E1002 11:30:14.267378 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:14 crc kubenswrapper[4725]: E1002 11:30:14.267564 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:15 crc kubenswrapper[4725]: I1002 11:30:15.267271 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:15 crc kubenswrapper[4725]: I1002 11:30:15.267274 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:15 crc kubenswrapper[4725]: E1002 11:30:15.267484 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:15 crc kubenswrapper[4725]: E1002 11:30:15.267572 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:16 crc kubenswrapper[4725]: I1002 11:30:16.267659 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:16 crc kubenswrapper[4725]: I1002 11:30:16.267691 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:16 crc kubenswrapper[4725]: E1002 11:30:16.267919 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:16 crc kubenswrapper[4725]: E1002 11:30:16.268036 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:16 crc kubenswrapper[4725]: E1002 11:30:16.415865 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 11:30:17 crc kubenswrapper[4725]: I1002 11:30:17.267433 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:17 crc kubenswrapper[4725]: E1002 11:30:17.267604 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:17 crc kubenswrapper[4725]: I1002 11:30:17.267433 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:17 crc kubenswrapper[4725]: E1002 11:30:17.267952 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:18 crc kubenswrapper[4725]: I1002 11:30:18.267070 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:18 crc kubenswrapper[4725]: I1002 11:30:18.267233 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:18 crc kubenswrapper[4725]: E1002 11:30:18.267408 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:18 crc kubenswrapper[4725]: E1002 11:30:18.267521 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:19 crc kubenswrapper[4725]: I1002 11:30:19.268072 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:19 crc kubenswrapper[4725]: I1002 11:30:19.268131 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:19 crc kubenswrapper[4725]: E1002 11:30:19.268218 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:19 crc kubenswrapper[4725]: E1002 11:30:19.268514 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:19 crc kubenswrapper[4725]: I1002 11:30:19.268576 4725 scope.go:117] "RemoveContainer" containerID="81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e" Oct 02 11:30:20 crc kubenswrapper[4725]: I1002 11:30:20.244545 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/1.log" Oct 02 11:30:20 crc kubenswrapper[4725]: I1002 11:30:20.244840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerStarted","Data":"e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8"} Oct 02 11:30:20 crc kubenswrapper[4725]: I1002 11:30:20.267939 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:20 crc kubenswrapper[4725]: I1002 11:30:20.268048 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:20 crc kubenswrapper[4725]: E1002 11:30:20.268863 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:20 crc kubenswrapper[4725]: E1002 11:30:20.268597 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:20 crc kubenswrapper[4725]: I1002 11:30:20.268929 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.251431 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/3.log" Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.255942 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerStarted","Data":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.256571 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.267933 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.268006 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:21 crc kubenswrapper[4725]: E1002 11:30:21.268126 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:21 crc kubenswrapper[4725]: E1002 11:30:21.268260 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.312084 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zxhp4"] Oct 02 11:30:21 crc kubenswrapper[4725]: I1002 11:30:21.315883 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podStartSLOduration=111.315854797 podStartE2EDuration="1m51.315854797s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:21.314287303 +0000 UTC m=+141.221786836" watchObservedRunningTime="2025-10-02 11:30:21.315854797 +0000 UTC m=+141.223354300" Oct 02 11:30:21 crc kubenswrapper[4725]: E1002 11:30:21.416392 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 11:30:22 crc kubenswrapper[4725]: I1002 11:30:22.262095 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:22 crc kubenswrapper[4725]: E1002 11:30:22.262838 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:22 crc kubenswrapper[4725]: I1002 11:30:22.267914 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:22 crc kubenswrapper[4725]: I1002 11:30:22.268014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:22 crc kubenswrapper[4725]: E1002 11:30:22.268070 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:22 crc kubenswrapper[4725]: E1002 11:30:22.268096 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:23 crc kubenswrapper[4725]: I1002 11:30:23.267329 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:23 crc kubenswrapper[4725]: E1002 11:30:23.267457 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:24 crc kubenswrapper[4725]: I1002 11:30:24.266879 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:24 crc kubenswrapper[4725]: E1002 11:30:24.267007 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:24 crc kubenswrapper[4725]: I1002 11:30:24.267171 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:24 crc kubenswrapper[4725]: I1002 11:30:24.267202 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:24 crc kubenswrapper[4725]: E1002 11:30:24.267270 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:24 crc kubenswrapper[4725]: E1002 11:30:24.267449 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:25 crc kubenswrapper[4725]: I1002 11:30:25.269075 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:25 crc kubenswrapper[4725]: E1002 11:30:25.270370 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 11:30:26 crc kubenswrapper[4725]: I1002 11:30:26.267166 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:26 crc kubenswrapper[4725]: I1002 11:30:26.267207 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:26 crc kubenswrapper[4725]: E1002 11:30:26.267299 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zxhp4" podUID="a6af8c70-d2e8-4891-bf65-1deb3fb02044" Oct 02 11:30:26 crc kubenswrapper[4725]: E1002 11:30:26.267452 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 11:30:26 crc kubenswrapper[4725]: I1002 11:30:26.267858 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:26 crc kubenswrapper[4725]: E1002 11:30:26.267954 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 11:30:27 crc kubenswrapper[4725]: I1002 11:30:27.267845 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:27 crc kubenswrapper[4725]: I1002 11:30:27.270369 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 02 11:30:27 crc kubenswrapper[4725]: I1002 11:30:27.270753 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.267691 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.267901 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.268039 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.270884 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.271582 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.272012 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.272176 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 02 11:30:28 crc kubenswrapper[4725]: I1002 11:30:28.605515 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.221515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.287907 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288172 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vdrxk"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288319 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288517 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288610 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288686 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.288552 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.289289 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.289670 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300818 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tv4\" (UniqueName: \"kubernetes.io/projected/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-kube-api-access-w9tv4\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300889 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968b2cb-6f43-43b1-a204-f99594ea8a1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300938 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300960 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74728246-d368-49cd-b41d-127de5ef0e1b-machine-approver-tls\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.300981 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301045 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968b2cb-6f43-43b1-a204-f99594ea8a1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-auth-proxy-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301082 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301101 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l6r\" (UniqueName: \"kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301154 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjsz\" (UniqueName: \"kubernetes.io/projected/1968b2cb-6f43-43b1-a204-f99594ea8a1b-kube-api-access-xbjsz\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301182 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7swdb\" (UniqueName: \"kubernetes.io/projected/74728246-d368-49cd-b41d-127de5ef0e1b-kube-api-access-7swdb\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.301244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jksln\" (UniqueName: \"kubernetes.io/projected/ac6f43e5-03b3-49a8-9e46-7c607c06f40c-kube-api-access-jksln\") pod \"downloads-7954f5f757-vdrxk\" (UID: \"ac6f43e5-03b3-49a8-9e46-7c607c06f40c\") " pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.304576 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.304955 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.305161 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.307019 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.307088 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.307235 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.307466 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.353108 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6fnf"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.353747 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.354929 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ps6mz"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.355238 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.355505 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.355650 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.356304 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.356912 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357203 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357370 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357381 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357556 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357611 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357697 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.357833 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.358244 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.359804 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.359911 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.359994 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.360074 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.360134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.360692 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vxnwq"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.360884 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.361038 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.361318 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.361550 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.361592 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362009 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362054 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362124 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362183 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362199 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.362765 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7mdf5"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.363006 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.363429 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.363971 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.364545 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.364671 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.368864 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369642 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369773 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369811 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369873 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369962 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.369787 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370057 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370226 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370341 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370522 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370632 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370781 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.370108 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.371081 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.371226 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.372284 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.373348 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.374823 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 
11:30:33.374998 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.375153 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.375433 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2sz4f"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.375964 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.376213 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.376262 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p4dgg"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.376382 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.376659 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.377331 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.377597 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.378582 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.379100 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4554v"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.379532 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.379805 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.380238 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.380443 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.380573 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.380789 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.382792 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.383768 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.388596 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.388647 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.389060 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.394391 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.395317 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.395770 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.395802 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.396330 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.396441 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.396544 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.396899 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.397568 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.398028 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.398309 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.398360 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.400326 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.401683 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.401809 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.401985 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.401831 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.402189 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.402397 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.406057 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.416979 4725 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.417124 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.417514 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.417617 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.417697 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.417809 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418028 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74728246-d368-49cd-b41d-127de5ef0e1b-machine-approver-tls\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418611 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1968b2cb-6f43-43b1-a204-f99594ea8a1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-auth-proxy-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418691 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l6r\" (UniqueName: \"kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjsz\" (UniqueName: \"kubernetes.io/projected/1968b2cb-6f43-43b1-a204-f99594ea8a1b-kube-api-access-xbjsz\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swdb\" (UniqueName: \"kubernetes.io/projected/74728246-d368-49cd-b41d-127de5ef0e1b-kube-api-access-7swdb\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jksln\" (UniqueName: \"kubernetes.io/projected/ac6f43e5-03b3-49a8-9e46-7c607c06f40c-kube-api-access-jksln\") pod \"downloads-7954f5f757-vdrxk\" (UID: \"ac6f43e5-03b3-49a8-9e46-7c607c06f40c\") " pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tv4\" (UniqueName: \"kubernetes.io/projected/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-kube-api-access-w9tv4\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418832 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.418859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968b2cb-6f43-43b1-a204-f99594ea8a1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.419527 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968b2cb-6f43-43b1-a204-f99594ea8a1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.419982 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-auth-proxy-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.422521 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.422671 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-smwg9"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.423323 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.423684 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.424033 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.424336 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.424471 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.424658 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425002 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968b2cb-6f43-43b1-a204-f99594ea8a1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425055 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425228 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425372 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425477 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.425980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.426511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.427001 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.427150 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.427978 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.428085 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.428191 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.429346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.429608 
4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.431245 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74728246-d368-49cd-b41d-127de5ef0e1b-config\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.432764 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.433514 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.433679 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74728246-d368-49cd-b41d-127de5ef0e1b-machine-approver-tls\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.434294 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.435366 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.435919 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.436647 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.436823 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.437106 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.437135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.437548 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.437624 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.437853 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.438183 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.438913 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.439888 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.440408 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.440942 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vdrxk"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.442007 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.442420 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.442589 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.442689 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.446078 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.452929 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.454116 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.483364 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.484004 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.484029 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.489441 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7v5h8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.490352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.490368 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.491866 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.495673 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.495889 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.497032 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.497258 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.497530 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.498332 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.501844 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dxw2d"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.502961 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.507932 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519479 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519515 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmqd\" (UniqueName: \"kubernetes.io/projected/e5b9a719-adef-4213-b931-3f20d44b90b7-kube-api-access-knmqd\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519535 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fbda729-20cb-4a89-9295-da8fb53f7136-metrics-tls\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/0361af9a-1b83-4972-b03c-a718779bc05a-kube-api-access-5ppgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p8r2\" (UniqueName: \"kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519586 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-serving-cert\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519616 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8cphh\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-kube-api-access-8cphh\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519636 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-service-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm6b\" (UniqueName: \"kubernetes.io/projected/e96f50b0-1c58-4c09-8554-16c1104a7298-kube-api-access-jbm6b\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519667 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-service-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519682 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-config\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519697 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzlv\" (UniqueName: \"kubernetes.io/projected/37af674e-88c7-4b76-9a74-371e60757f7c-kube-api-access-6bzlv\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519884 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519923 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-client\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519949 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r6jrw\" (UniqueName: \"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-kube-api-access-r6jrw\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.519987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-audit-policies\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e96f50b0-1c58-4c09-8554-16c1104a7298-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520044 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520071 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520091 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45a64c58-e326-4e39-a87b-94bf31b48c9d-proxy-tls\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520112 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520150 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc349021-6211-4cf4-9ec7-d50a5f9814bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520175 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nzwp7\" (UniqueName: \"kubernetes.io/projected/01689a82-0e94-4995-a494-6c8bc2116e93-kube-api-access-nzwp7\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520198 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520224 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520245 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-config\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520268 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-trusted-ca\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/910e1016-708f-4940-9a30-c949c8e58b54-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520308 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-images\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 
11:30:33.520360 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qldt\" (UniqueName: \"kubernetes.io/projected/c36f3900-450a-437f-9fda-b3c7ccf6b4be-kube-api-access-7qldt\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0361af9a-1b83-4972-b03c-a718779bc05a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfpc\" (UniqueName: \"kubernetes.io/projected/dc349021-6211-4cf4-9ec7-d50a5f9814bb-kube-api-access-5dfpc\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/910e1016-708f-4940-9a30-c949c8e58b54-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520481 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jmc\" (UniqueName: \"kubernetes.io/projected/45a64c58-e326-4e39-a87b-94bf31b48c9d-kube-api-access-26jmc\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqx5\" (UniqueName: \"kubernetes.io/projected/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-kube-api-access-5jqx5\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520541 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc53fbf-3f2c-41c2-be09-43be29dc3865-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520563 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-client\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fc53fbf-3f2c-41c2-be09-43be29dc3865-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgt86\" (UniqueName: \"kubernetes.io/projected/b854d5e6-ba97-4f14-be97-49c1b0151d93-kube-api-access-pgt86\") pod \"migrator-59844c95c7-8x9gx\" (UID: \"b854d5e6-ba97-4f14-be97-49c1b0151d93\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520681 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520701 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520749 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit-dir\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520770 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 
02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520806 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-node-pullsecrets\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520827 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520880 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01689a82-0e94-4995-a494-6c8bc2116e93-serving-cert\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-config\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520947 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520973 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-encryption-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.520993 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-encryption-config\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521019 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521081 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c36f3900-450a-437f-9fda-b3c7ccf6b4be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521104 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e96f50b0-1c58-4c09-8554-16c1104a7298-serving-cert\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521127 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw54\" (UniqueName: \"kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521478 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521515 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-images\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521541 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521560 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521589 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-2sdhg\" (UniqueName: \"kubernetes.io/projected/a0940542-0c18-472b-8fe9-2363f88ec264-kube-api-access-2sdhg\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521608 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521679 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-client\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521734 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.521765 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522021 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522095 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc53fbf-3f2c-41c2-be09-43be29dc3865-config\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522137 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z598k\" (UniqueName: \"kubernetes.io/projected/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-kube-api-access-z598k\") pod 
\"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522206 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-config\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522240 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-serving-cert\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522263 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5b9a719-adef-4213-b931-3f20d44b90b7-metrics-tls\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522308 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522429 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522529 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: 
\"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522582 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0940542-0c18-472b-8fe9-2363f88ec264-audit-dir\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522614 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb49g\" (UniqueName: \"kubernetes.io/projected/e1135f15-46ec-4d45-8167-d810903ee497-kube-api-access-rb49g\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-serving-cert\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522858 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0361af9a-1b83-4972-b03c-a718779bc05a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.522939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1135f15-46ec-4d45-8167-d810903ee497-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.523241 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fbda729-20cb-4a89-9295-da8fb53f7136-trusted-ca\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.523295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-serving-cert\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.523320 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc349021-6211-4cf4-9ec7-d50a5f9814bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.523371 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-image-import-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.524789 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.525980 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bstp2"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.525450 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.525052 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.526746 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.528355 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.528388 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.529301 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.530090 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.530321 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.530597 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.530803 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lpd8k"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.531560 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533215 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p4dgg"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533246 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ps6mz"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533259 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533270 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533283 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4554v"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533296 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vxnwq"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533307 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533319 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533330 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6fnf"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533341 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533351 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.533426 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.534489 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.538773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.541413 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fcfzl"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.542341 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.543441 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7mdf5"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.545207 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.547457 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7v5h8"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.549066 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.549949 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.550441 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.551477 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.552525 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.553955 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bstp2"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.555287 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.556131 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.559042 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-smwg9"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.566628 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2sz4f"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.573363 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fcfzl"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.574193 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.575613 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.581531 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.583431 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.584604 4725 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.585630 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.586976 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.588272 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tmvx6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.590043 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.590255 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pzz9t"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.590595 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.593780 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pzz9t"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.593885 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.595149 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tmvx6"] Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.611010 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfz2\" (UniqueName: \"kubernetes.io/projected/41437658-6209-4ba0-8707-dc873b07d0f3-kube-api-access-ftfz2\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-serving-cert\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624340 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate\") pod \"router-default-5444994796-dxw2d\" 
(UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-config\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624411 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mhx4\" (UniqueName: \"kubernetes.io/projected/c1db9267-244a-40a4-ae74-8ce562f97a4c-kube-api-access-8mhx4\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624513 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624539 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825215a6-1ebc-426c-b54d-f54f6c261f55-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624562 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhzmj\" (UniqueName: \"kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj\") pod 
\"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624584 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-serving-cert\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624605 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624628 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-serving-cert\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc349021-6211-4cf4-9ec7-d50a5f9814bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624680 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-image-import-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624738 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knmqd\" (UniqueName: \"kubernetes.io/projected/e5b9a719-adef-4213-b931-3f20d44b90b7-kube-api-access-knmqd\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-serving-cert\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624800 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-config\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624824 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624876 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e96f50b0-1c58-4c09-8554-16c1104a7298-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-srv-cert\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-audit-policies\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624948 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45a64c58-e326-4e39-a87b-94bf31b48c9d-proxy-tls\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.624989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625010 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625034 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc349021-6211-4cf4-9ec7-d50a5f9814bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625059 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-images\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625107 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-trusted-ca\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfpc\" (UniqueName: \"kubernetes.io/projected/dc349021-6211-4cf4-9ec7-d50a5f9814bb-kube-api-access-5dfpc\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625153 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625177 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc53fbf-3f2c-41c2-be09-43be29dc3865-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqx5\" (UniqueName: \"kubernetes.io/projected/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-kube-api-access-5jqx5\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625268 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgt86\" (UniqueName: \"kubernetes.io/projected/b854d5e6-ba97-4f14-be97-49c1b0151d93-kube-api-access-pgt86\") pod \"migrator-59844c95c7-8x9gx\" (UID: \"b854d5e6-ba97-4f14-be97-49c1b0151d93\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-key\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625290 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625315 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-client\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625321 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-config\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " 
pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625410 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit-dir\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625445 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625457 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-profile-collector-cert\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-node-pullsecrets\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625516 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625535 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625555 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625580 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7abb266f-c1ef-43e3-9aef-213be819dc8e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625606 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01689a82-0e94-4995-a494-6c8bc2116e93-serving-cert\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-config\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625648 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-encryption-config\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625693 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit-dir\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-node-pullsecrets\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.625714 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7abb266f-c1ef-43e3-9aef-213be819dc8e-config\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.626430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.626467 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-audit\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.626594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.626706 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e96f50b0-1c58-4c09-8554-16c1104a7298-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-audit-policies\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627761 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-images\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627783 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45a64c58-e326-4e39-a87b-94bf31b48c9d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c36f3900-450a-437f-9fda-b3c7ccf6b4be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627898 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627927 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdhg\" (UniqueName: \"kubernetes.io/projected/a0940542-0c18-472b-8fe9-2363f88ec264-kube-api-access-2sdhg\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.627950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628031 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628068 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-client\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628094 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628118 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv4v\" (UniqueName: \"kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 
crc kubenswrapper[4725]: I1002 11:30:33.628180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc53fbf-3f2c-41c2-be09-43be29dc3865-config\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z598k\" (UniqueName: \"kubernetes.io/projected/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-kube-api-access-z598k\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628328 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-config\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5b9a719-adef-4213-b931-3f20d44b90b7-metrics-tls\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlzm\" (UniqueName: \"kubernetes.io/projected/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-kube-api-access-cqlzm\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628417 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1db9267-244a-40a4-ae74-8ce562f97a4c-proxy-tls\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0940542-0c18-472b-8fe9-2363f88ec264-audit-dir\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628461 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb49g\" (UniqueName: 
\"kubernetes.io/projected/e1135f15-46ec-4d45-8167-d810903ee497-kube-api-access-rb49g\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0361af9a-1b83-4972-b03c-a718779bc05a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628979 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-serving-cert\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.628429 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629029 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0940542-0c18-472b-8fe9-2363f88ec264-audit-dir\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629177 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-config\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629493 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1135f15-46ec-4d45-8167-d810903ee497-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" 
Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629576 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fbda729-20cb-4a89-9295-da8fb53f7136-trusted-ca\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629634 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629650 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01689a82-0e94-4995-a494-6c8bc2116e93-trusted-ca\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629693 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59759\" (UniqueName: \"kubernetes.io/projected/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-kube-api-access-59759\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629765 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fbda729-20cb-4a89-9295-da8fb53f7136-metrics-tls\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629792 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629817 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmjl\" (UniqueName: \"kubernetes.io/projected/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-kube-api-access-qdmjl\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jbg4\" (UniqueName: \"kubernetes.io/projected/0d430bf0-eef2-4ec2-9941-57c5005e5931-kube-api-access-4jbg4\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629872 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/0361af9a-1b83-4972-b03c-a718779bc05a-kube-api-access-5ppgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p8r2\" (UniqueName: \"kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629947 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.629988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1db9267-244a-40a4-ae74-8ce562f97a4c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630014 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-service-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cphh\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-kube-api-access-8cphh\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630067 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-service-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbm6b\" (UniqueName: \"kubernetes.io/projected/e96f50b0-1c58-4c09-8554-16c1104a7298-kube-api-access-jbm6b\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzlv\" (UniqueName: \"kubernetes.io/projected/37af674e-88c7-4b76-9a74-371e60757f7c-kube-api-access-6bzlv\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmkq\" (UniqueName: \"kubernetes.io/projected/1019f382-13cb-47d6-ae1f-f4ea54bd3008-kube-api-access-8gmkq\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6jrw\" (UniqueName: \"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-kube-api-access-r6jrw\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630779 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-cabundle\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-client\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630876 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pjbp\" (UniqueName: \"kubernetes.io/projected/368d8e72-b80a-4336-b454-a73ea5f9a858-kube-api-access-4pjbp\") pod 
\"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630909 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630933 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/368d8e72-b80a-4336-b454-a73ea5f9a858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.630980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwp7\" (UniqueName: \"kubernetes.io/projected/01689a82-0e94-4995-a494-6c8bc2116e93-kube-api-access-nzwp7\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631005 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-config\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631095 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/910e1016-708f-4940-9a30-c949c8e58b54-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qldt\" (UniqueName: \"kubernetes.io/projected/c36f3900-450a-437f-9fda-b3c7ccf6b4be-kube-api-access-7qldt\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0361af9a-1b83-4972-b03c-a718779bc05a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: 
\"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631148 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631165 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jmc\" (UniqueName: \"kubernetes.io/projected/45a64c58-e326-4e39-a87b-94bf31b48c9d-kube-api-access-26jmc\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631182 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/910e1016-708f-4940-9a30-c949c8e58b54-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fc53fbf-3f2c-41c2-be09-43be29dc3865-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631213 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7abb266f-c1ef-43e3-9aef-213be819dc8e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631251 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-service-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631705 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-image-import-ca\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 
11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.631845 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-config\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.632125 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-client\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.632134 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-service-ca-bundle\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.632533 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633133 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-serving-cert\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45a64c58-e326-4e39-a87b-94bf31b48c9d-proxy-tls\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-etcd-client\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") 
" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633521 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633562 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633593 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0d430bf0-eef2-4ec2-9941-57c5005e5931-tmpfs\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633666 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633667 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01689a82-0e94-4995-a494-6c8bc2116e93-serving-cert\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633685 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-encryption-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633705 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0361af9a-1b83-4972-b03c-a718779bc05a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e96f50b0-1c58-4c09-8554-16c1104a7298-serving-cert\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzw54\" (UniqueName: \"kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633783 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/910e1016-708f-4940-9a30-c949c8e58b54-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633853 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.633994 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-images\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634146 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634185 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir\") pod \"oauth-openshift-558db77b4-x5npd\" 
(UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634146 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634248 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634280 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8d9\" (UniqueName: \"kubernetes.io/projected/825215a6-1ebc-426c-b54d-f54f6c261f55-kube-api-access-wg8d9\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634468 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634788 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fbda729-20cb-4a89-9295-da8fb53f7136-metrics-tls\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634801 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/910e1016-708f-4940-9a30-c949c8e58b54-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 
11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.634921 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635056 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0940542-0c18-472b-8fe9-2363f88ec264-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635352 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635383 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37af674e-88c7-4b76-9a74-371e60757f7c-serving-cert\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635416 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c36f3900-450a-437f-9fda-b3c7ccf6b4be-images\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635748 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-serving-cert\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.635997 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.636123 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.636178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-encryption-config\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.636854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c36f3900-450a-437f-9fda-b3c7ccf6b4be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.637211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.637264 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-encryption-config\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.637298 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0940542-0c18-472b-8fe9-2363f88ec264-etcd-client\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.637320 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.637947 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e96f50b0-1c58-4c09-8554-16c1104a7298-serving-cert\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.638137 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0361af9a-1b83-4972-b03c-a718779bc05a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.638223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.639182 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/37af674e-88c7-4b76-9a74-371e60757f7c-etcd-ca\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.646770 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5b9a719-adef-4213-b931-3f20d44b90b7-metrics-tls\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.650755 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.670799 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.698325 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.702572 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fbda729-20cb-4a89-9295-da8fb53f7136-trusted-ca\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.710452 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.718442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc349021-6211-4cf4-9ec7-d50a5f9814bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.731358 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8d9\" (UniqueName: \"kubernetes.io/projected/825215a6-1ebc-426c-b54d-f54f6c261f55-kube-api-access-wg8d9\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736106 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfz2\" (UniqueName: 
\"kubernetes.io/projected/41437658-6209-4ba0-8707-dc873b07d0f3-kube-api-access-ftfz2\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736124 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mhx4\" (UniqueName: \"kubernetes.io/projected/c1db9267-244a-40a4-ae74-8ce562f97a4c-kube-api-access-8mhx4\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825215a6-1ebc-426c-b54d-f54f6c261f55-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736313 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhzmj\" (UniqueName: \"kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736356 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-srv-cert\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736413 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-key\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736432 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736452 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7abb266f-c1ef-43e3-9aef-213be819dc8e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736482 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736499 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736516 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7abb266f-c1ef-43e3-9aef-213be819dc8e-config\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv4v\" (UniqueName: \"kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736623 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlzm\" (UniqueName: \"kubernetes.io/projected/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-kube-api-access-cqlzm\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1db9267-244a-40a4-ae74-8ce562f97a4c-proxy-tls\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59759\" (UniqueName: \"kubernetes.io/projected/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-kube-api-access-59759\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbg4\" (UniqueName: \"kubernetes.io/projected/0d430bf0-eef2-4ec2-9941-57c5005e5931-kube-api-access-4jbg4\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736765 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmjl\" (UniqueName: \"kubernetes.io/projected/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-kube-api-access-qdmjl\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1db9267-244a-40a4-ae74-8ce562f97a4c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.736856 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gmkq\" (UniqueName: \"kubernetes.io/projected/1019f382-13cb-47d6-ae1f-f4ea54bd3008-kube-api-access-8gmkq\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-cabundle\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737390 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjbp\" (UniqueName: \"kubernetes.io/projected/368d8e72-b80a-4336-b454-a73ea5f9a858-kube-api-access-4pjbp\") pod \"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/368d8e72-b80a-4336-b454-a73ea5f9a858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737461 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7abb266f-c1ef-43e3-9aef-213be819dc8e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/0d430bf0-eef2-4ec2-9941-57c5005e5931-tmpfs\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737604 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.737621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.738081 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0d430bf0-eef2-4ec2-9941-57c5005e5931-tmpfs\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.738257 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1db9267-244a-40a4-ae74-8ce562f97a4c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.750523 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.758901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc349021-6211-4cf4-9ec7-d50a5f9814bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.771643 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.790522 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.810889 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.830767 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.851570 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 02 11:30:33 crc 
kubenswrapper[4725]: I1002 11:30:33.864825 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1135f15-46ec-4d45-8167-d810903ee497-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.870783 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.890751 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.901218 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc53fbf-3f2c-41c2-be09-43be29dc3865-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.911581 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.930467 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.939523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc53fbf-3f2c-41c2-be09-43be29dc3865-config\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.951415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.958029 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.971247 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 02 11:30:33 crc kubenswrapper[4725]: I1002 11:30:33.992491 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.011859 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.015811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.032090 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.051552 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.097302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l6r\" (UniqueName: \"kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r\") pod \"controller-manager-879f6c89f-m285b\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.109242 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjsz\" (UniqueName: \"kubernetes.io/projected/1968b2cb-6f43-43b1-a204-f99594ea8a1b-kube-api-access-xbjsz\") pod \"openshift-apiserver-operator-796bbdcf4f-dmfx8\" (UID: \"1968b2cb-6f43-43b1-a204-f99594ea8a1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.126873 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.133454 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swdb\" (UniqueName: \"kubernetes.io/projected/74728246-d368-49cd-b41d-127de5ef0e1b-kube-api-access-7swdb\") pod \"machine-approver-56656f9798-t55dx\" (UID: \"74728246-d368-49cd-b41d-127de5ef0e1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.146103 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jksln\" (UniqueName: \"kubernetes.io/projected/ac6f43e5-03b3-49a8-9e46-7c607c06f40c-kube-api-access-jksln\") pod \"downloads-7954f5f757-vdrxk\" (UID: \"ac6f43e5-03b3-49a8-9e46-7c607c06f40c\") " pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.172866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9tv4\" (UniqueName: \"kubernetes.io/projected/d12f8c4f-905c-419a-bbfe-9ccda55d9b02-kube-api-access-w9tv4\") pod \"cluster-samples-operator-665b6dd947-jvcbp\" (UID: \"d12f8c4f-905c-419a-bbfe-9ccda55d9b02\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.191412 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.212038 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.231331 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 02 
11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.243793 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.259110 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.270164 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.270437 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.270185 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.292296 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.293258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" Oct 02 11:30:34 crc kubenswrapper[4725]: W1002 11:30:34.307930 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74728246_d368_49cd_b41d_127de5ef0e1b.slice/crio-bdfc4803afefebcea8fa1b69a16948830659d319cdc0e9242d6640230d7a46ba WatchSource:0}: Error finding container bdfc4803afefebcea8fa1b69a16948830659d319cdc0e9242d6640230d7a46ba: Status 404 returned error can't find the container with id bdfc4803afefebcea8fa1b69a16948830659d319cdc0e9242d6640230d7a46ba Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.311235 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.321276 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8"] Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.322975 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1db9267-244a-40a4-ae74-8ce562f97a4c-proxy-tls\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.331673 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 02 11:30:34 crc kubenswrapper[4725]: W1002 11:30:34.333946 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1968b2cb_6f43_43b1_a204_f99594ea8a1b.slice/crio-2f10e9e8b81173c82e7a47cdae1f61f78b153ca16cd98f764dfad4d4382ba799 WatchSource:0}: Error finding container 2f10e9e8b81173c82e7a47cdae1f61f78b153ca16cd98f764dfad4d4382ba799: Status 404 returned error can't find the container with id 2f10e9e8b81173c82e7a47cdae1f61f78b153ca16cd98f764dfad4d4382ba799 Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.340582 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825215a6-1ebc-426c-b54d-f54f6c261f55-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.351602 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.357842 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" event={"ID":"1968b2cb-6f43-43b1-a204-f99594ea8a1b","Type":"ContainerStarted","Data":"2f10e9e8b81173c82e7a47cdae1f61f78b153ca16cd98f764dfad4d4382ba799"} Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.358834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" event={"ID":"74728246-d368-49cd-b41d-127de5ef0e1b","Type":"ContainerStarted","Data":"bdfc4803afefebcea8fa1b69a16948830659d319cdc0e9242d6640230d7a46ba"} Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.371063 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.385052 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.390974 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.402026 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.411228 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.420902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7abb266f-c1ef-43e3-9aef-213be819dc8e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.430936 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.437988 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7abb266f-c1ef-43e3-9aef-213be819dc8e-config\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.453985 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.465693 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.467300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.474293 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 11:30:34 crc kubenswrapper[4725]: W1002 11:30:34.488682 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958f4455_ed96_4896_b03c_dec837e33311.slice/crio-8af9b9b48b51e10fe1cd25dd7de2ac4e6efdc4d4730cc495f179f9188ffcb539 WatchSource:0}: Error finding container 8af9b9b48b51e10fe1cd25dd7de2ac4e6efdc4d4730cc495f179f9188ffcb539: Status 404 returned error can't find the container with id 8af9b9b48b51e10fe1cd25dd7de2ac4e6efdc4d4730cc495f179f9188ffcb539 Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.489801 4725 request.go:700] Waited for 1.005971477s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.491449 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 
11:30:34.513055 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.531244 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.545207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.547203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-profile-collector-cert\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.548960 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.551172 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.570976 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.579069 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp"] Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.591461 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.603129 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-key\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.614571 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vdrxk"] Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.614795 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.631528 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.638471 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-signing-cabundle\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: 
\"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:34 crc kubenswrapper[4725]: W1002 11:30:34.644704 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac6f43e5_03b3_49a8_9e46_7c607c06f40c.slice/crio-963a77b58d99287be33f2c8adece1d8f2962ee54f97fc0a609f8137528348605 WatchSource:0}: Error finding container 963a77b58d99287be33f2c8adece1d8f2962ee54f97fc0a609f8137528348605: Status 404 returned error can't find the container with id 963a77b58d99287be33f2c8adece1d8f2962ee54f97fc0a609f8137528348605 Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.650792 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.660082 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1019f382-13cb-47d6-ae1f-f4ea54bd3008-srv-cert\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.671196 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.691922 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.702350 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/368d8e72-b80a-4336-b454-a73ea5f9a858-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.710096 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.731275 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736353 4725 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736467 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate podName:f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.236432376 +0000 UTC m=+155.143931839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate") pod "router-default-5444994796-dxw2d" (UID: "f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736488 4725 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736579 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert podName:41437658-6209-4ba0-8707-dc873b07d0f3 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.236552519 +0000 UTC m=+155.144051982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert") pod "service-ca-operator-777779d784-bstp2" (UID: "41437658-6209-4ba0-8707-dc873b07d0f3") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736636 4725 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736786 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert podName:0d430bf0-eef2-4ec2-9941-57c5005e5931 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.236760475 +0000 UTC m=+155.144259948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert") pod "packageserver-d55dfcdfc-sgktv" (UID: "0d430bf0-eef2-4ec2-9941-57c5005e5931") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736844 4725 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.736986 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert podName:0d430bf0-eef2-4ec2-9941-57c5005e5931 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.23695133 +0000 UTC m=+155.144450973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert") pod "packageserver-d55dfcdfc-sgktv" (UID: "0d430bf0-eef2-4ec2-9941-57c5005e5931") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737445 4725 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737499 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert podName:f3ff8f8f-8453-4b51-83fa-9aeda104fbff nodeName:}" failed. 
No retries permitted until 2025-10-02 11:30:35.237487734 +0000 UTC m=+155.144987197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert") pod "olm-operator-6b444d44fb-l6fgp" (UID: "f3ff8f8f-8453-4b51-83fa-9aeda104fbff") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737811 4725 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737860 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth podName:f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.237851935 +0000 UTC m=+155.145351398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth") pod "router-default-5444994796-dxw2d" (UID: "f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737888 4725 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737902 4725 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737925 4725 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737934 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs podName:f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.237925877 +0000 UTC m=+155.145425340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs") pod "router-default-5444994796-dxw2d" (UID: "f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732") : failed to sync secret cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.737993 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle podName:f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.237980058 +0000 UTC m=+155.145479531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle") pod "router-default-5444994796-dxw2d" (UID: "f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732") : failed to sync configmap cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: E1002 11:30:34.738010 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config podName:41437658-6209-4ba0-8707-dc873b07d0f3 nodeName:}" failed. No retries permitted until 2025-10-02 11:30:35.238001779 +0000 UTC m=+155.145501252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config") pod "service-ca-operator-777779d784-bstp2" (UID: "41437658-6209-4ba0-8707-dc873b07d0f3") : failed to sync configmap cache: timed out waiting for the condition Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.751450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.770422 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.792254 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.810442 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.831518 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.850824 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.872009 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.890401 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.911529 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.930455 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.950910 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.971391 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 02 11:30:34 crc kubenswrapper[4725]: I1002 11:30:34.991131 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.011440 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.030585 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.050966 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.070669 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.090995 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.111563 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.131084 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.152142 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.171691 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.193371 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.211577 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.230825 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.251901 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.261442 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.261509 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.261546 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 
11:30:35.261700 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.261975 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.262112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.262168 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.262334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.262474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.272230 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.292620 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.311242 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.331602 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.351089 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.363406 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" 
event={"ID":"958f4455-ed96-4896-b03c-dec837e33311","Type":"ContainerStarted","Data":"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.363466 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" event={"ID":"958f4455-ed96-4896-b03c-dec837e33311","Type":"ContainerStarted","Data":"8af9b9b48b51e10fe1cd25dd7de2ac4e6efdc4d4730cc495f179f9188ffcb539"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.364992 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" event={"ID":"74728246-d368-49cd-b41d-127de5ef0e1b","Type":"ContainerStarted","Data":"1c9aca2f2716a3735519dfed272890df74c2a1bed32d691f7e99c9dacbc929c7"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.366212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vdrxk" event={"ID":"ac6f43e5-03b3-49a8-9e46-7c607c06f40c","Type":"ContainerStarted","Data":"64cf61de8f95b50746e918edefad937f8218575439ab21eec3d730c9cfb46e66"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.366254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vdrxk" event={"ID":"ac6f43e5-03b3-49a8-9e46-7c607c06f40c","Type":"ContainerStarted","Data":"963a77b58d99287be33f2c8adece1d8f2962ee54f97fc0a609f8137528348605"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.366949 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" event={"ID":"d12f8c4f-905c-419a-bbfe-9ccda55d9b02","Type":"ContainerStarted","Data":"c37a7a2ec0600b9e573825811ebca91d50c198bc979e5ba486e2b7f37b3a9959"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.368003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" event={"ID":"1968b2cb-6f43-43b1-a204-f99594ea8a1b","Type":"ContainerStarted","Data":"ccf9b2bc5ff263fd9d8b1a74ebe0ca239491f83fa5d8c86a8051528ef0f98735"} Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.370709 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.380277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41437658-6209-4ba0-8707-dc873b07d0f3-config\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.380343 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-service-ca-bundle\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.383235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-metrics-certs\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " 
pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.383287 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.383823 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d430bf0-eef2-4ec2-9941-57c5005e5931-webhook-cert\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.387319 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-default-certificate\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.387319 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-stats-auth\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.389199 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41437658-6209-4ba0-8707-dc873b07d0f3-serving-cert\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.389350 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-srv-cert\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.392085 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.410412 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.446847 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqx5\" (UniqueName: \"kubernetes.io/projected/d1cce09c-0bfa-4ca8-ae80-b854d69be12e-kube-api-access-5jqx5\") pod \"authentication-operator-69f744f599-7mdf5\" (UID: \"d1cce09c-0bfa-4ca8-ae80-b854d69be12e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.467740 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.484933 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfpc\" (UniqueName: \"kubernetes.io/projected/dc349021-6211-4cf4-9ec7-d50a5f9814bb-kube-api-access-5dfpc\") pod \"kube-storage-version-migrator-operator-b67b599dd-tg727\" (UID: \"dc349021-6211-4cf4-9ec7-d50a5f9814bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.489853 4725 request.go:700] Waited for 1.862233712s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.494425 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.506967 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgt86\" (UniqueName: \"kubernetes.io/projected/b854d5e6-ba97-4f14-be97-49c1b0151d93-kube-api-access-pgt86\") pod \"migrator-59844c95c7-8x9gx\" (UID: \"b854d5e6-ba97-4f14-be97-49c1b0151d93\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.519426 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.531983 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.546008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knmqd\" (UniqueName: \"kubernetes.io/projected/e5b9a719-adef-4213-b931-3f20d44b90b7-kube-api-access-knmqd\") pod \"dns-operator-744455d44c-vxnwq\" (UID: \"e5b9a719-adef-4213-b931-3f20d44b90b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.565509 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb49g\" (UniqueName: \"kubernetes.io/projected/e1135f15-46ec-4d45-8167-d810903ee497-kube-api-access-rb49g\") pod \"multus-admission-controller-857f4d67dd-smwg9\" (UID: \"e1135f15-46ec-4d45-8167-d810903ee497\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.588883 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z598k\" (UniqueName: \"kubernetes.io/projected/0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f-kube-api-access-z598k\") pod \"apiserver-76f77b778f-g6fnf\" (UID: \"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f\") " pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:35 crc 
kubenswrapper[4725]: I1002 11:30:35.607619 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ppgh\" (UniqueName: \"kubernetes.io/projected/0361af9a-1b83-4972-b03c-a718779bc05a-kube-api-access-5ppgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-bm5qf\" (UID: \"0361af9a-1b83-4972-b03c-a718779bc05a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.618839 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.630246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbm6b\" (UniqueName: \"kubernetes.io/projected/e96f50b0-1c58-4c09-8554-16c1104a7298-kube-api-access-jbm6b\") pod \"openshift-config-operator-7777fb866f-wgm9g\" (UID: \"e96f50b0-1c58-4c09-8554-16c1104a7298\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.642417 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.646548 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6jrw\" (UniqueName: \"kubernetes.io/projected/8fbda729-20cb-4a89-9295-da8fb53f7136-kube-api-access-r6jrw\") pod \"ingress-operator-5b745b69d9-fqgld\" (UID: \"8fbda729-20cb-4a89-9295-da8fb53f7136\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.664797 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p8r2\" (UniqueName: \"kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2\") pod \"oauth-openshift-558db77b4-x5npd\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.678514 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.679681 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727"] Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.687944 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cphh\" (UniqueName: \"kubernetes.io/projected/910e1016-708f-4940-9a30-c949c8e58b54-kube-api-access-8cphh\") pod \"cluster-image-registry-operator-dc59b4c8b-2flkj\" (UID: \"910e1016-708f-4940-9a30-c949c8e58b54\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.695427 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" Oct 02 11:30:35 crc kubenswrapper[4725]: W1002 11:30:35.696052 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc349021_6211_4cf4_9ec7_d50a5f9814bb.slice/crio-314c32db6b9eee88e48357ae6020f696eb4800d1aa239e6536542cbaf52abbb2 WatchSource:0}: Error finding container 314c32db6b9eee88e48357ae6020f696eb4800d1aa239e6536542cbaf52abbb2: Status 404 returned error can't find the container with id 314c32db6b9eee88e48357ae6020f696eb4800d1aa239e6536542cbaf52abbb2 Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.705079 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.720198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdhg\" (UniqueName: \"kubernetes.io/projected/a0940542-0c18-472b-8fe9-2363f88ec264-kube-api-access-2sdhg\") pod \"apiserver-7bbb656c7d-b5cg7\" (UID: \"a0940542-0c18-472b-8fe9-2363f88ec264\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.720511 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.736119 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzlv\" (UniqueName: \"kubernetes.io/projected/37af674e-88c7-4b76-9a74-371e60757f7c-kube-api-access-6bzlv\") pod \"etcd-operator-b45778765-2sz4f\" (UID: \"37af674e-88c7-4b76-9a74-371e60757f7c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.737227 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx"] Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.752334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwp7\" (UniqueName: \"kubernetes.io/projected/01689a82-0e94-4995-a494-6c8bc2116e93-kube-api-access-nzwp7\") pod \"console-operator-58897d9998-p4dgg\" (UID: \"01689a82-0e94-4995-a494-6c8bc2116e93\") " pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.770230 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fc53fbf-3f2c-41c2-be09-43be29dc3865-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vwx75\" (UID: \"6fc53fbf-3f2c-41c2-be09-43be29dc3865\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.780886 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.787262 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.791474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qldt\" (UniqueName: \"kubernetes.io/projected/c36f3900-450a-437f-9fda-b3c7ccf6b4be-kube-api-access-7qldt\") pod \"machine-api-operator-5694c8668f-ps6mz\" (UID: \"c36f3900-450a-437f-9fda-b3c7ccf6b4be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.806625 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.810407 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jmc\" (UniqueName: \"kubernetes.io/projected/45a64c58-e326-4e39-a87b-94bf31b48c9d-kube-api-access-26jmc\") pod \"machine-config-operator-74547568cd-4554v\" (UID: \"45a64c58-e326-4e39-a87b-94bf31b48c9d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.812638 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.826446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzw54\" (UniqueName: \"kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54\") pod \"console-f9d7485db-kq4vt\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.853558 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8acc906c-1b87-4f44-b20e-c4ab1e8474a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r647p\" (UID: \"8acc906c-1b87-4f44-b20e-c4ab1e8474a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.853652 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.855246 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf"] Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.889022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8d9\" (UniqueName: \"kubernetes.io/projected/825215a6-1ebc-426c-b54d-f54f6c261f55-kube-api-access-wg8d9\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hrs6\" (UID: \"825215a6-1ebc-426c-b54d-f54f6c261f55\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.909860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfz2\" (UniqueName: \"kubernetes.io/projected/41437658-6209-4ba0-8707-dc873b07d0f3-kube-api-access-ftfz2\") pod \"service-ca-operator-777779d784-bstp2\" (UID: \"41437658-6209-4ba0-8707-dc873b07d0f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.910424 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6fnf"] Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.945409 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mhx4\" (UniqueName: \"kubernetes.io/projected/c1db9267-244a-40a4-ae74-8ce562f97a4c-kube-api-access-8mhx4\") pod \"machine-config-controller-84d6567774-vj4zt\" (UID: \"c1db9267-244a-40a4-ae74-8ce562f97a4c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.946611 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.957701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhzmj\" (UniqueName: \"kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj\") pod \"marketplace-operator-79b997595-m5sv8\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.959980 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" Oct 02 11:30:35 crc kubenswrapper[4725]: W1002 11:30:35.967610 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4bbd99_d18d_48d4_aa5d_0da65f7edc2f.slice/crio-3f9ad5d7428d8d56b37a3b3244dc054824e5984f17aa5ca2efa4bd2fefcbe3d3 WatchSource:0}: Error finding container 3f9ad5d7428d8d56b37a3b3244dc054824e5984f17aa5ca2efa4bd2fefcbe3d3: Status 404 returned error can't find the container with id 3f9ad5d7428d8d56b37a3b3244dc054824e5984f17aa5ca2efa4bd2fefcbe3d3 Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.974816 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7abb266f-c1ef-43e3-9aef-213be819dc8e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dnp4\" (UID: \"7abb266f-c1ef-43e3-9aef-213be819dc8e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.986590 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:35 crc kubenswrapper[4725]: I1002 11:30:35.995903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv4v\" (UniqueName: \"kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v\") pod \"collect-profiles-29323410-c7zck\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.004196 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.011888 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.012416 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlzm\" (UniqueName: \"kubernetes.io/projected/f3ff8f8f-8453-4b51-83fa-9aeda104fbff-kube-api-access-cqlzm\") pod \"olm-operator-6b444d44fb-l6fgp\" (UID: \"f3ff8f8f-8453-4b51-83fa-9aeda104fbff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.018631 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.028346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jbg4\" (UniqueName: \"kubernetes.io/projected/0d430bf0-eef2-4ec2-9941-57c5005e5931-kube-api-access-4jbg4\") pod \"packageserver-d55dfcdfc-sgktv\" (UID: \"0d430bf0-eef2-4ec2-9941-57c5005e5931\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.030133 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.051098 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode96f50b0_1c58_4c09_8554_16c1104a7298.slice/crio-90f21b04b98a521c828932cf7fc8dc28675aebfbcd023721af085eb75bd45cf4 WatchSource:0}: Error finding container 90f21b04b98a521c828932cf7fc8dc28675aebfbcd023721af085eb75bd45cf4: Status 404 returned error can't find the container with id 90f21b04b98a521c828932cf7fc8dc28675aebfbcd023721af085eb75bd45cf4 Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.051641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59759\" (UniqueName: \"kubernetes.io/projected/f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732-kube-api-access-59759\") pod \"router-default-5444994796-dxw2d\" (UID: \"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732\") " pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.077641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmjl\" (UniqueName: \"kubernetes.io/projected/d3648e0c-583a-44a3-8b06-3d1b1ae1491b-kube-api-access-qdmjl\") pod \"service-ca-9c57cc56f-7v5h8\" (UID: \"d3648e0c-583a-44a3-8b06-3d1b1ae1491b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.101492 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.116310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gmkq\" (UniqueName: \"kubernetes.io/projected/1019f382-13cb-47d6-ae1f-f4ea54bd3008-kube-api-access-8gmkq\") pod \"catalog-operator-68c6474976-574ct\" (UID: \"1019f382-13cb-47d6-ae1f-f4ea54bd3008\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.126311 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.127078 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjbp\" (UniqueName: \"kubernetes.io/projected/368d8e72-b80a-4336-b454-a73ea5f9a858-kube-api-access-4pjbp\") pod \"package-server-manager-789f6589d5-nkzc6\" (UID: \"368d8e72-b80a-4336-b454-a73ea5f9a858\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.134412 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.167922 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.167958 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.168540 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.168612 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.174414 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.180364 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.180420 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.180491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-mountpoint-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.180847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.180952 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-socket-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181036 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181089 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a005b-f47d-4e0c-b704-95ada1fa0584-config-volume\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc 
kubenswrapper[4725]: I1002 11:30:36.181126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-csi-data-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181151 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181203 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-registration-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181332 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-node-bootstrap-token\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181384 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181410 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/871a005b-f47d-4e0c-b704-95ada1fa0584-metrics-tls\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181507 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181534 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrkm\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181557 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4zz\" 
(UniqueName: \"kubernetes.io/projected/dae3a03e-7663-46b9-9638-21d1eedc2f86-kube-api-access-nj4zz\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181701 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-plugins-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181897 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-cert\") pod \"ingress-canary-pzz9t\" (UID: \"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.181994 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpmj\" (UniqueName: \"kubernetes.io/projected/e6e25413-d1ac-4132-80bd-2119f36405cb-kube-api-access-2fpmj\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182064 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182254 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gk22\" (UniqueName: \"kubernetes.io/projected/871a005b-f47d-4e0c-b704-95ada1fa0584-kube-api-access-9gk22\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngwq\" (UniqueName: \"kubernetes.io/projected/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-kube-api-access-nngwq\") pod \"ingress-canary-pzz9t\" (UID: \"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182355 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm29n\" (UniqueName: \"kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: 
\"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182447 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-certs\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.182566 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.193138 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.194906 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:36.6948899 +0000 UTC m=+156.602389363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.196957 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.198444 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.203456 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.206939 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303304 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303338 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-mountpoint-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303395 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-socket-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303414 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303432 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/871a005b-f47d-4e0c-b704-95ada1fa0584-config-volume\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303446 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-csi-data-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303461 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-registration-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-node-bootstrap-token\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303513 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/871a005b-f47d-4e0c-b704-95ada1fa0584-metrics-tls\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrkm\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303601 4725 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-nj4zz\" (UniqueName: \"kubernetes.io/projected/dae3a03e-7663-46b9-9638-21d1eedc2f86-kube-api-access-nj4zz\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-plugins-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-cert\") pod \"ingress-canary-pzz9t\" (UID: \"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpmj\" (UniqueName: \"kubernetes.io/projected/e6e25413-d1ac-4132-80bd-2119f36405cb-kube-api-access-2fpmj\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.303710 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gk22\" (UniqueName: \"kubernetes.io/projected/871a005b-f47d-4e0c-b704-95ada1fa0584-kube-api-access-9gk22\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.304608 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:36.804578049 +0000 UTC m=+156.712077512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.304628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-registration-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.304799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-socket-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.304821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngwq\" (UniqueName: \"kubernetes.io/projected/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-kube-api-access-nngwq\") pod \"ingress-canary-pzz9t\" (UID: \"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.304871 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm29n\" (UniqueName: \"kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.304895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-certs\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.305673 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-csi-data-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.306142 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a005b-f47d-4e0c-b704-95ada1fa0584-config-volume\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.306605 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.307338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-mountpoint-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.307654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.307751 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6e25413-d1ac-4132-80bd-2119f36405cb-plugins-dir\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.310096 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.310306 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.310406 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-certs\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.310461 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.310487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.311098 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-cert\") pod \"ingress-canary-pzz9t\" 
(UID: \"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.312443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.317884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/871a005b-f47d-4e0c-b704-95ada1fa0584-metrics-tls\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.317994 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.318354 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dae3a03e-7663-46b9-9638-21d1eedc2f86-node-bootstrap-token\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.358950 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.362968 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vxnwq"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.368273 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj4zz\" (UniqueName: \"kubernetes.io/projected/dae3a03e-7663-46b9-9638-21d1eedc2f86-kube-api-access-nj4zz\") pod \"machine-config-server-lpd8k\" (UID: \"dae3a03e-7663-46b9-9638-21d1eedc2f86\") " pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.372085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpmj\" (UniqueName: \"kubernetes.io/projected/e6e25413-d1ac-4132-80bd-2119f36405cb-kube-api-access-2fpmj\") pod \"csi-hostpathplugin-tmvx6\" (UID: \"e6e25413-d1ac-4132-80bd-2119f36405cb\") " pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.377584 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7mdf5"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.395612 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.400906 4725 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-smwg9"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.402518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" event={"ID":"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f","Type":"ContainerStarted","Data":"3f9ad5d7428d8d56b37a3b3244dc054824e5984f17aa5ca2efa4bd2fefcbe3d3"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.405987 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.406409 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:36.906393302 +0000 UTC m=+156.813892765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.407551 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" event={"ID":"74728246-d368-49cd-b41d-127de5ef0e1b","Type":"ContainerStarted","Data":"fbb925f1e4b984059f55ea4bd52f1e7f8572678fd870a00745386c27dca00ab9"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.412347 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrkm\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.430410 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" event={"ID":"dc349021-6211-4cf4-9ec7-d50a5f9814bb","Type":"ContainerStarted","Data":"522c21875494059339027b253cc6938063bd2328bee40505a7f573c382bdc8b7"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.430462 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" event={"ID":"dc349021-6211-4cf4-9ec7-d50a5f9814bb","Type":"ContainerStarted","Data":"314c32db6b9eee88e48357ae6020f696eb4800d1aa239e6536542cbaf52abbb2"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.434076 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngwq\" (UniqueName: \"kubernetes.io/projected/3a3bc794-c04f-4e8d-9c4b-dd3502e017b3-kube-api-access-nngwq\") pod \"ingress-canary-pzz9t\" (UID: 
\"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3\") " pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.441901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" event={"ID":"e96f50b0-1c58-4c09-8554-16c1104a7298","Type":"ContainerStarted","Data":"90f21b04b98a521c828932cf7fc8dc28675aebfbcd023721af085eb75bd45cf4"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.450397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm29n\" (UniqueName: \"kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n\") pod \"route-controller-manager-6576b87f9c-6gs7l\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.459455 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" event={"ID":"0361af9a-1b83-4972-b03c-a718779bc05a","Type":"ContainerStarted","Data":"f0165f8aa45c7758fd90991ce8343ff0a5ac907aa7633b1fdcf95681d2d59f5a"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.459499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" event={"ID":"0361af9a-1b83-4972-b03c-a718779bc05a","Type":"ContainerStarted","Data":"f6ac767f4df4f663e1bac5dd74d26e014ac5157d6bf412b99d99900cc7c8b51e"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.462939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" event={"ID":"b854d5e6-ba97-4f14-be97-49c1b0151d93","Type":"ContainerStarted","Data":"1a652f0789a2bc2b2b972bd537f78ac164144563722da4c5ce7d005116616211"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.462969 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" event={"ID":"b854d5e6-ba97-4f14-be97-49c1b0151d93","Type":"ContainerStarted","Data":"2546cfc88cce61f897473843003cc1884a6254fbc2df5b2931435e8e68417fcb"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.486548 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gk22\" (UniqueName: \"kubernetes.io/projected/871a005b-f47d-4e0c-b704-95ada1fa0584-kube-api-access-9gk22\") pod \"dns-default-fcfzl\" (UID: \"871a005b-f47d-4e0c-b704-95ada1fa0584\") " pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.492341 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" event={"ID":"d12f8c4f-905c-419a-bbfe-9ccda55d9b02","Type":"ContainerStarted","Data":"a3212681d025f76658361590bbc6a574c532656db11b69604a8739b428532c69"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.492405 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" event={"ID":"d12f8c4f-905c-419a-bbfe-9ccda55d9b02","Type":"ContainerStarted","Data":"fc359720fd1157ffa07e19d508f8c38572dcf718bafa210c70239ac933499bd5"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.496832 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" event={"ID":"b79b9e09-2453-4a58-af84-0732c5f7892d","Type":"ContainerStarted","Data":"94eebfc125067f4f04bcd1f162adba0683f033de7c9831bd917728351d94f8d0"} Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.496895 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.497876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.500288 4725 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m285b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.500362 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" podUID="958f4455-ed96-4896-b03c-dec837e33311" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.500451 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.500481 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.504553 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod910e1016_708f_4940_9a30_c949c8e58b54.slice/crio-036ebfbd60fdc8d51c11cb8860b42747203e52be9f2e318a2e2fdd9fc0d7b43d WatchSource:0}: Error finding container 036ebfbd60fdc8d51c11cb8860b42747203e52be9f2e318a2e2fdd9fc0d7b43d: Status 404 returned error can't find the container with id 036ebfbd60fdc8d51c11cb8860b42747203e52be9f2e318a2e2fdd9fc0d7b43d Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.509735 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.512391 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.012368839 +0000 UTC m=+156.919868302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.513583 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.520941 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75"] Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.525688 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5b9a719_adef_4213_b931_3f20d44b90b7.slice/crio-8459426fe6ac50387b3e6a9314efb439eddb4c147494ecf86f8f98de97fa1db2 WatchSource:0}: Error finding container 8459426fe6ac50387b3e6a9314efb439eddb4c147494ecf86f8f98de97fa1db2: Status 404 returned error can't find the container with id 8459426fe6ac50387b3e6a9314efb439eddb4c147494ecf86f8f98de97fa1db2 Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.529706 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lpd8k" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.537077 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.537577 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.539303 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p4dgg"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.553579 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.568771 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pzz9t" Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.616598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.616889 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.116869065 +0000 UTC m=+157.024368518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.667389 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4554v"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.680874 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.692264 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2sz4f"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.698215 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.712625 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p"] Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.713083 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fbda729_20cb_4a89_9295_da8fb53f7136.slice/crio-3f6fb032f0a23d37fbd11f2d0156683299fe6db1e2736c2cdcc38ab824dcdf25 WatchSource:0}: Error finding container 3f6fb032f0a23d37fbd11f2d0156683299fe6db1e2736c2cdcc38ab824dcdf25: Status 404 returned error can't find the container with id 3f6fb032f0a23d37fbd11f2d0156683299fe6db1e2736c2cdcc38ab824dcdf25 Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.714852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ps6mz"] Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.720444 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.721133 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.221117294 +0000 UTC m=+157.128616767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.760026 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37af674e_88c7_4b76_9a74_371e60757f7c.slice/crio-d10889ec600d776d11635e59aea949ddcd9b1025e878e57065b8f709ef99bf9b WatchSource:0}: Error finding container d10889ec600d776d11635e59aea949ddcd9b1025e878e57065b8f709ef99bf9b: Status 404 returned error can't find the container with id d10889ec600d776d11635e59aea949ddcd9b1025e878e57065b8f709ef99bf9b Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.821883 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.822150 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.322139045 +0000 UTC m=+157.229638508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: W1002 11:30:36.856277 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e4bee1_a3eb_4e37_bfcb_99350ce66859.slice/crio-db7345580fdea77b25de58168184bfd26410b194ce8eee5367d1649aac91850c WatchSource:0}: Error finding container db7345580fdea77b25de58168184bfd26410b194ce8eee5367d1649aac91850c: Status 404 returned error can't find the container with id db7345580fdea77b25de58168184bfd26410b194ce8eee5367d1649aac91850c Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.923848 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.924072 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-02 11:30:37.424044551 +0000 UTC m=+157.331544014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:36 crc kubenswrapper[4725]: I1002 11:30:36.925086 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:36 crc kubenswrapper[4725]: E1002 11:30:36.925412 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.425393938 +0000 UTC m=+157.332893451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.026127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.026295 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.526264385 +0000 UTC m=+157.433763848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.026448 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.026825 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.526810131 +0000 UTC m=+157.434309594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.067682 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.076968 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7v5h8"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.079399 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.088685 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.129667 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.131498 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.631475601 +0000 UTC m=+157.538975064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.161468 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.167211 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.186442 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tmvx6"] Oct 02 11:30:37 crc kubenswrapper[4725]: W1002 11:30:37.229605 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3648e0c_583a_44a3_8b06_3d1b1ae1491b.slice/crio-d458925002ec32675740bce73d2de0333b8c883bf0b7c96b4fcc34cec43f4772 WatchSource:0}: Error finding container d458925002ec32675740bce73d2de0333b8c883bf0b7c96b4fcc34cec43f4772: Status 404 returned error can't find the container with id d458925002ec32675740bce73d2de0333b8c883bf0b7c96b4fcc34cec43f4772 Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.232577 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.232617 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.232656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.232709 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.232755 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.233954 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.733942421 +0000 UTC m=+157.641441884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.241613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.242248 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.242369 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.243517 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.243600 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.264081 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bstp2"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.285169 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.291668 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.298778 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.298816 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.298827 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.307926 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.315466 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pzz9t"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.333645 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.333813 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.833788941 +0000 UTC m=+157.741288394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.333904 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.334278 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.834266244 +0000 UTC m=+157.741765707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: W1002 11:30:37.338707 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a84f459_c277_472f_a84c_328e8523f8e0.slice/crio-783b9681f21f76e7245dec8f5f4340292f98ee23eeda38d3ae42d83a46d047f7 WatchSource:0}: Error finding container 783b9681f21f76e7245dec8f5f4340292f98ee23eeda38d3ae42d83a46d047f7: Status 404 returned error can't find the container with id 783b9681f21f76e7245dec8f5f4340292f98ee23eeda38d3ae42d83a46d047f7 Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.386540 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fcfzl"] Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.441484 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.443265 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:37.943249533 +0000 UTC m=+157.850748996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: W1002 11:30:37.445990 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368d8e72_b80a_4336_b454_a73ea5f9a858.slice/crio-a29300958d628c67b4342fec1938a81865fff03b2f01104d934bf8b1fd85b790 WatchSource:0}: Error finding container a29300958d628c67b4342fec1938a81865fff03b2f01104d934bf8b1fd85b790: Status 404 returned error can't find the container with id a29300958d628c67b4342fec1938a81865fff03b2f01104d934bf8b1fd85b790 Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.489579 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.522436 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dmfx8" podStartSLOduration=127.522416188 podStartE2EDuration="2m7.522416188s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:37.521024979 +0000 UTC m=+157.428524442" watchObservedRunningTime="2025-10-02 11:30:37.522416188 +0000 UTC m=+157.429915651" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.526357 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" event={"ID":"8acc906c-1b87-4f44-b20e-c4ab1e8474a7","Type":"ContainerStarted","Data":"00b6b123cee59b938f26296b92ac70ce8f4effd40a6cb1aa559fa942e4325127"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.537402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" event={"ID":"b79b9e09-2453-4a58-af84-0732c5f7892d","Type":"ContainerStarted","Data":"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.537468 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.542167 4725 generic.go:334] "Generic (PLEG): container finished" podID="0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f" containerID="7a1c3ee28ae8e10dc84db03a2a0e5f7e8e8807c59cd6f5a351cbbb421ca85bb7" exitCode=0 Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.542244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" event={"ID":"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f","Type":"ContainerDied","Data":"7a1c3ee28ae8e10dc84db03a2a0e5f7e8e8807c59cd6f5a351cbbb421ca85bb7"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.543774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.544126 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.04411395 +0000 UTC m=+157.951613413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.544588 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" event={"ID":"a0940542-0c18-472b-8fe9-2363f88ec264","Type":"ContainerStarted","Data":"00907724cdc1285932d04627eebb63f29a6b5b91c6849e68a88615ee87a79fef"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.547665 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x5npd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.547732 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.548218 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" event={"ID":"e6e25413-d1ac-4132-80bd-2119f36405cb","Type":"ContainerStarted","Data":"fe043a4757e2be0633c4a2ee678f4e52f848f830350bbd038f20737411cd4a05"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.591934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" event={"ID":"45a64c58-e326-4e39-a87b-94bf31b48c9d","Type":"ContainerStarted","Data":"0003b460aa229ef94d11742c1fdf294dc3a6a2afe647d82811ce349459d2024c"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.594680 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" event={"ID":"7abb266f-c1ef-43e3-9aef-213be819dc8e","Type":"ContainerStarted","Data":"899912244a9bbcf3836ac0fc6f19ac676929a356d243ad7ebe94058fad6cb9d1"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.606953 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kq4vt" event={"ID":"a7e4bee1-a3eb-4e37-bfcb-99350ce66859","Type":"ContainerStarted","Data":"db7345580fdea77b25de58168184bfd26410b194ce8eee5367d1649aac91850c"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.613080 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" event={"ID":"1019f382-13cb-47d6-ae1f-f4ea54bd3008","Type":"ContainerStarted","Data":"26d55a521076fc70e8d3138726aa19fe2e6273b34a43c9148029c7b21307540f"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.615570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" event={"ID":"d3648e0c-583a-44a3-8b06-3d1b1ae1491b","Type":"ContainerStarted","Data":"d458925002ec32675740bce73d2de0333b8c883bf0b7c96b4fcc34cec43f4772"} 
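The pattern in the entries above is consistent: every MountVolume.MountDevice attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 (wanted by image-registry-697d97f7c8-99xdm) and every UnmountVolume.TearDown for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b fails for the same root cause: kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner yet. That is expected at this point in the boot, because the csi-hostpathplugin-tmvx6 pod that performs the registration is itself only just starting (its "No sandbox for pod can be found" and first ContainerStarted events appear a few entries earlier). Until the driver's registrar announces itself on kubelet's plugin-registration socket, both operations are requeued on a 500ms backoff. One way to watch for the registration from the API side is to poll the node's CSINode object, which mirrors kubelet's driver registry; the Go sketch below does that with client-go, assuming in-cluster credentials and the node name crc (both are assumptions for illustration, not taken from this capture):

```go
// Minimal sketch: list the CSI drivers kubelet has registered on a node,
// to confirm whether the hostpath provisioner named in the log above has
// finished registration. Node name "crc" and in-cluster config are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// CSINode mirrors kubelet's plugin registry: one entry per registered driver.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	found := false
	for _, d := range node.Spec.Drivers {
		fmt.Printf("registered driver: %s (node ID %s)\n", d.Name, d.NodeID)
		if d.Name == "kubevirt.io.hostpath-provisioner" {
			found = true
		}
	}
	if !found {
		fmt.Println("kubevirt.io.hostpath-provisioner not registered yet; mounts will keep failing as above")
	}
}
```

Once kubevirt.io.hostpath-provisioner shows up in spec.drivers, the retried MountDevice for the image-registry PVC should succeed on its next 500ms attempt, and the stale TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b should drain the same way.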
Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.616687 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" event={"ID":"e1135f15-46ec-4d45-8167-d810903ee497","Type":"ContainerStarted","Data":"1b4ef98b1d5de4b64b941fd2b6d4a5929b200b0ffb68796222e35183c4b003eb"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.621021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" event={"ID":"910e1016-708f-4940-9a30-c949c8e58b54","Type":"ContainerStarted","Data":"f940bea1094864c0586136750a368af4bd13c79d48ac50d22d010bcb20739678"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.621189 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" event={"ID":"910e1016-708f-4940-9a30-c949c8e58b54","Type":"ContainerStarted","Data":"036ebfbd60fdc8d51c11cb8860b42747203e52be9f2e318a2e2fdd9fc0d7b43d"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.624402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" event={"ID":"f3ff8f8f-8453-4b51-83fa-9aeda104fbff","Type":"ContainerStarted","Data":"53bd3e6076bd8c480995160c9ed225ef41f0d46af31b97f49ca950f802c538aa"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.637277 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" event={"ID":"9a84f459-c277-472f-a84c-328e8523f8e0","Type":"ContainerStarted","Data":"783b9681f21f76e7245dec8f5f4340292f98ee23eeda38d3ae42d83a46d047f7"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.642036 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" event={"ID":"6fc53fbf-3f2c-41c2-be09-43be29dc3865","Type":"ContainerStarted","Data":"91b0c5cbb9ac7ced1efe9fd1099fab0d66f3cc1155339b34275ed4c5ba0377d8"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.642083 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" event={"ID":"6fc53fbf-3f2c-41c2-be09-43be29dc3865","Type":"ContainerStarted","Data":"3e32e56c49dd4f27ec9d7a2f6f67381839be6e91e72ca2e3da586aee65c53306"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.650799 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.652329 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.152313377 +0000 UTC m=+158.059812840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.670207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" event={"ID":"538365c3-9ba5-4fbd-aca1-05525f5a3250","Type":"ContainerStarted","Data":"1ca929e006691a6f77f5e3f0a5535fce03a97d4f69e2aa678572f87e44a6683f"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.677484 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" event={"ID":"825215a6-1ebc-426c-b54d-f54f6c261f55","Type":"ContainerStarted","Data":"ba647c0c8954db54207d0d754941f50caa05aeb85c6b09f9b48245e16e800660"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.681945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" event={"ID":"b854d5e6-ba97-4f14-be97-49c1b0151d93","Type":"ContainerStarted","Data":"9f735c7c761b8d6e4965cd669688d27aabc4163b1c566b3c9fdfe012f0d3f0b3"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.715414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" event={"ID":"c1db9267-244a-40a4-ae74-8ce562f97a4c","Type":"ContainerStarted","Data":"788e136098d57acd473c6487ada595bdd20862290d3e33f69fd0715ec81669e4"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.717306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" event={"ID":"37af674e-88c7-4b76-9a74-371e60757f7c","Type":"ContainerStarted","Data":"d10889ec600d776d11635e59aea949ddcd9b1025e878e57065b8f709ef99bf9b"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.726756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" event={"ID":"d1cce09c-0bfa-4ca8-ae80-b854d69be12e","Type":"ContainerStarted","Data":"0966f6bbb356cf11b08a4559727a5776dbe8fb472d9518c5a55b0e0b16e6a60a"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.726798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" event={"ID":"d1cce09c-0bfa-4ca8-ae80-b854d69be12e","Type":"ContainerStarted","Data":"906a2cc5b37c07e016d84597af72362daeec1ee19e23fba575aa39310b8151b7"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.737506 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" event={"ID":"8fbda729-20cb-4a89-9295-da8fb53f7136","Type":"ContainerStarted","Data":"c2329ed927332532eda8063ac0e6142cb8524b1a6aa95f65207e99d7608b24a8"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.737547 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" event={"ID":"8fbda729-20cb-4a89-9295-da8fb53f7136","Type":"ContainerStarted","Data":"3f6fb032f0a23d37fbd11f2d0156683299fe6db1e2736c2cdcc38ab824dcdf25"} Oct 02 
11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.744701 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dxw2d" event={"ID":"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732","Type":"ContainerStarted","Data":"cd2c406e4da11c635149321b8306404498ce7669f866361016412c0f13a706ca"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.745734 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" event={"ID":"41437658-6209-4ba0-8707-dc873b07d0f3","Type":"ContainerStarted","Data":"eacb571aa04843f57a64d9a3d65c15607fdb1fbb2dcb780047dc8fe31e869344"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.753171 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.755001 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.254990004 +0000 UTC m=+158.162489467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.770579 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pzz9t" event={"ID":"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3","Type":"ContainerStarted","Data":"36b455cde4ed925af9519c39076ba9e30d9cefac60c134824582d5f15f8f0754"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.781338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" event={"ID":"368d8e72-b80a-4336-b454-a73ea5f9a858","Type":"ContainerStarted","Data":"a29300958d628c67b4342fec1938a81865fff03b2f01104d934bf8b1fd85b790"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.816206 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" event={"ID":"c36f3900-450a-437f-9fda-b3c7ccf6b4be","Type":"ContainerStarted","Data":"4288b604fadf8471c167de4d1bd16a01ce67c89605f944fb0ce4b1420b0da0a7"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.822344 4725 generic.go:334] "Generic (PLEG): container finished" podID="e96f50b0-1c58-4c09-8554-16c1104a7298" containerID="0b603a72c48688c8a658d83ac8ad99ee263bd9df8830e02e00792ca92b6cb962" exitCode=0 Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.822406 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" event={"ID":"e96f50b0-1c58-4c09-8554-16c1104a7298","Type":"ContainerDied","Data":"0b603a72c48688c8a658d83ac8ad99ee263bd9df8830e02e00792ca92b6cb962"} Oct 02 
11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.833227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lpd8k" event={"ID":"dae3a03e-7663-46b9-9638-21d1eedc2f86","Type":"ContainerStarted","Data":"75716c8fc9d8a158cff197bb7c002772604a4de9b9d711d7da54dd1ede222473"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.835878 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" event={"ID":"01689a82-0e94-4995-a494-6c8bc2116e93","Type":"ContainerStarted","Data":"1046882737d608eb6e8e5236afcd68d66a11db0933dc75b8f1082caf5e0a076c"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.861308 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.862490 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.362472182 +0000 UTC m=+158.269971645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.862864 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" event={"ID":"e5b9a719-adef-4213-b931-3f20d44b90b7","Type":"ContainerStarted","Data":"8459426fe6ac50387b3e6a9314efb439eddb4c147494ecf86f8f98de97fa1db2"} Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.862922 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.862967 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.897628 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.922033 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jvcbp" podStartSLOduration=128.92201736 podStartE2EDuration="2m8.92201736s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:37.921449335 +0000 UTC m=+157.828948818" watchObservedRunningTime="2025-10-02 11:30:37.92201736 +0000 UTC m=+157.829516823" Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.965472 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:37 crc kubenswrapper[4725]: E1002 11:30:37.970548 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.470533416 +0000 UTC m=+158.378032879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:37 crc kubenswrapper[4725]: I1002 11:30:37.972868 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vdrxk" podStartSLOduration=127.972847339 podStartE2EDuration="2m7.972847339s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:37.964612473 +0000 UTC m=+157.872111936" watchObservedRunningTime="2025-10-02 11:30:37.972847339 +0000 UTC m=+157.880346812" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.043022 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2flkj" podStartSLOduration=128.042998466 podStartE2EDuration="2m8.042998466s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.039163462 +0000 UTC m=+157.946662925" watchObservedRunningTime="2025-10-02 11:30:38.042998466 +0000 UTC m=+157.950497929" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.068030 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.068436 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.568417821 +0000 UTC m=+158.475917284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.091953 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vwx75" podStartSLOduration=128.091934554 podStartE2EDuration="2m8.091934554s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.087133783 +0000 UTC m=+157.994633256" watchObservedRunningTime="2025-10-02 11:30:38.091934554 +0000 UTC m=+157.999434017" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.168793 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t55dx" podStartSLOduration=129.168772944 podStartE2EDuration="2m9.168772944s" podCreationTimestamp="2025-10-02 11:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.124254177 +0000 UTC m=+158.031753640" watchObservedRunningTime="2025-10-02 11:30:38.168772944 +0000 UTC m=+158.076272407" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.177565 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.178475 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.678455009 +0000 UTC m=+158.585954472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.180405 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" podStartSLOduration=128.180374071 podStartE2EDuration="2m8.180374071s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.176202907 +0000 UTC m=+158.083702370" watchObservedRunningTime="2025-10-02 11:30:38.180374071 +0000 UTC m=+158.087873534" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.209963 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" podStartSLOduration=128.20994262 podStartE2EDuration="2m8.20994262s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.209102897 +0000 UTC m=+158.116602360" watchObservedRunningTime="2025-10-02 11:30:38.20994262 +0000 UTC m=+158.117442083" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.279615 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.280065 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.780049666 +0000 UTC m=+158.687549129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.308999 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bm5qf" podStartSLOduration=128.308976856 podStartE2EDuration="2m8.308976856s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.308589806 +0000 UTC m=+158.216089269" watchObservedRunningTime="2025-10-02 11:30:38.308976856 +0000 UTC m=+158.216476329" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.311121 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8x9gx" podStartSLOduration=128.311103545 podStartE2EDuration="2m8.311103545s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.274942346 +0000 UTC m=+158.182441809" watchObservedRunningTime="2025-10-02 11:30:38.311103545 +0000 UTC m=+158.218603008" Oct 02 11:30:38 crc kubenswrapper[4725]: W1002 11:30:38.359877 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-71af1c39c4aea8eecaec2e96ea4b920195252ad7f0229adee8f23f734a9e3f0d WatchSource:0}: Error finding container 71af1c39c4aea8eecaec2e96ea4b920195252ad7f0229adee8f23f734a9e3f0d: Status 404 returned error can't find the container with id 71af1c39c4aea8eecaec2e96ea4b920195252ad7f0229adee8f23f734a9e3f0d Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.382168 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.382490 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.882476896 +0000 UTC m=+158.789976359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.430189 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tg727" podStartSLOduration=128.43017503 podStartE2EDuration="2m8.43017503s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.382920948 +0000 UTC m=+158.290420411" watchObservedRunningTime="2025-10-02 11:30:38.43017503 +0000 UTC m=+158.337674493" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.430387 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7mdf5" podStartSLOduration=128.430383025 podStartE2EDuration="2m8.430383025s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.429944733 +0000 UTC m=+158.337444196" watchObservedRunningTime="2025-10-02 11:30:38.430383025 +0000 UTC m=+158.337882488" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.483623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.484146 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:38.984131504 +0000 UTC m=+158.891630967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.588091 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.588497 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.088481857 +0000 UTC m=+158.995981320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.688774 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.689761 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.189740884 +0000 UTC m=+159.097240347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.791503 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.792615 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.292546315 +0000 UTC m=+159.200045778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.881996 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" event={"ID":"01689a82-0e94-4995-a494-6c8bc2116e93","Type":"ContainerStarted","Data":"a268440d968dc5040031a22c86f8c30f52d6e8b58c7fc6bfbf36b40ad4a64078"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.882777 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.892871 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.893295 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.393275838 +0000 UTC m=+159.300775301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.894102 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" event={"ID":"c36f3900-450a-437f-9fda-b3c7ccf6b4be","Type":"ContainerStarted","Data":"de2568bfff0c38f0d2eacd2e345e8859a5d4cef3b79ec00126316e197f7ccc6f"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.895384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"71af1c39c4aea8eecaec2e96ea4b920195252ad7f0229adee8f23f734a9e3f0d"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.899162 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kq4vt" event={"ID":"a7e4bee1-a3eb-4e37-bfcb-99350ce66859","Type":"ContainerStarted","Data":"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.900822 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" event={"ID":"8acc906c-1b87-4f44-b20e-c4ab1e8474a7","Type":"ContainerStarted","Data":"a8f7377e8dedf02d1e1503152521c0ac4b8ed14d98cef963d99fe298cbd00f48"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.904176 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-p4dgg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.904222 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" podUID="01689a82-0e94-4995-a494-6c8bc2116e93" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.905684 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" event={"ID":"e1135f15-46ec-4d45-8167-d810903ee497","Type":"ContainerStarted","Data":"2aa4a4de89a62a79db99422254fe3d224de7b2c558acd7e587b7a4028dc40936"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.911095 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" podStartSLOduration=128.911077814 podStartE2EDuration="2m8.911077814s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.905240945 +0000 UTC m=+158.812740398" watchObservedRunningTime="2025-10-02 11:30:38.911077814 +0000 UTC m=+158.818577297" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.924785 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" event={"ID":"e5b9a719-adef-4213-b931-3f20d44b90b7","Type":"ContainerStarted","Data":"8ad24fd47e921dffbcef945b59b2c6b04d900069deb497faede5126c8514db96"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.928953 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fcfzl" event={"ID":"871a005b-f47d-4e0c-b704-95ada1fa0584","Type":"ContainerStarted","Data":"6f6a0b1b5325665f77dbd5621c43f06b3428afc4474ef34b33f8203b0055c825"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.932356 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r647p" podStartSLOduration=128.932315304 podStartE2EDuration="2m8.932315304s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:38.926861036 +0000 UTC m=+158.834360499" watchObservedRunningTime="2025-10-02 11:30:38.932315304 +0000 UTC m=+158.839814767" Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.947245 4725 generic.go:334] "Generic (PLEG): container finished" podID="a0940542-0c18-472b-8fe9-2363f88ec264" containerID="e8c6965d7bf68a62c83232f515cd998ac271c9e0cf4d76299f1183501f4251ec" exitCode=0 Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.947354 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" event={"ID":"a0940542-0c18-472b-8fe9-2363f88ec264","Type":"ContainerDied","Data":"e8c6965d7bf68a62c83232f515cd998ac271c9e0cf4d76299f1183501f4251ec"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.953633 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dxw2d" event={"ID":"f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732","Type":"ContainerStarted","Data":"82d9c11144c90db9aee94ee2a5b23a6c77439f3605bdd7a874c1d6f9a58486a1"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.955710 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" event={"ID":"45a64c58-e326-4e39-a87b-94bf31b48c9d","Type":"ContainerStarted","Data":"a51a76609d5828d6c83d49179906e739f630850e329868568cd09be7d254563b"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.959989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lpd8k" event={"ID":"dae3a03e-7663-46b9-9638-21d1eedc2f86","Type":"ContainerStarted","Data":"da31d1116756b8b3b09dba0453da18fcfd4fd1589fe671652c8f83763f0bc1bd"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.983803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d52583d0ad53866a93d30eba0f9e08a4742b81a78114af9e4f0c6bd6141a3dce"} Oct 02 11:30:38 crc kubenswrapper[4725]: I1002 11:30:38.995757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:38 crc kubenswrapper[4725]: E1002 11:30:38.997934 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.497914667 +0000 UTC m=+159.405414200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.007624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" event={"ID":"7abb266f-c1ef-43e3-9aef-213be819dc8e","Type":"ContainerStarted","Data":"23042b9238e9ac4038291b5536f0250eaa581bac7c463d53dbfbde4237569fdb"} Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.008553 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kq4vt" podStartSLOduration=129.008542228 podStartE2EDuration="2m9.008542228s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:39.008049765 +0000 UTC m=+158.915549248" watchObservedRunningTime="2025-10-02 11:30:39.008542228 +0000 UTC m=+158.916041691" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.038921 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lpd8k" podStartSLOduration=6.038904109 podStartE2EDuration="6.038904109s" podCreationTimestamp="2025-10-02 11:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:39.038122587 +0000 UTC m=+158.945622050" watchObservedRunningTime="2025-10-02 11:30:39.038904109 +0000 UTC m=+158.946403572" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.039538 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" event={"ID":"1019f382-13cb-47d6-ae1f-f4ea54bd3008","Type":"ContainerStarted","Data":"610207dbc0e2cf96904a486aa2990e3690040975f36d765567db14c7406d0df3"} Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.041129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" event={"ID":"0d430bf0-eef2-4ec2-9941-57c5005e5931","Type":"ContainerStarted","Data":"35115f742e12bbcca514dcbacbbef3c74b5f3b2ec64e9b00c1d93c7480665c3f"} Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.043154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" event={"ID":"b8342476-d06a-48c8-84de-89b1531728e1","Type":"ContainerStarted","Data":"c818d2be84b8f34945cece373a06369c02144720c562f5d6810e0b37cf72d9a1"} Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.070581 4725 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" event={"ID":"37af674e-88c7-4b76-9a74-371e60757f7c","Type":"ContainerStarted","Data":"a6dd41648433ecc12888e88d862ce1d7ed27729ce64c3f218c39a75f88621add"} Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.071872 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x5npd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.071913 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.096564 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.098188 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.598169338 +0000 UTC m=+159.505668801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.109107 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dxw2d" podStartSLOduration=129.109084697 podStartE2EDuration="2m9.109084697s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:39.076460354 +0000 UTC m=+158.983959827" watchObservedRunningTime="2025-10-02 11:30:39.109084697 +0000 UTC m=+159.016584160" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.110166 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dnp4" podStartSLOduration=129.110157606 podStartE2EDuration="2m9.110157606s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:39.10702882 +0000 UTC m=+159.014528293" watchObservedRunningTime="2025-10-02 11:30:39.110157606 +0000 UTC m=+159.017657069" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.203408 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.203582 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.203618 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.204474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.206786 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.706770587 +0000 UTC m=+159.614270050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.307174 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.308071 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.808052755 +0000 UTC m=+159.715552218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.409678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.410098 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:39.910082474 +0000 UTC m=+159.817581937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.510451 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.510657 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.010626493 +0000 UTC m=+159.918125956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.510737 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.511075 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.011062404 +0000 UTC m=+159.918561867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.612355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.612998 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.11298237 +0000 UTC m=+160.020481833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.715415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.715972 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.215953104 +0000 UTC m=+160.123452647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.816120 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.816321 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.316290347 +0000 UTC m=+160.223789810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.816353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.816714 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.316704588 +0000 UTC m=+160.224204121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:39 crc kubenswrapper[4725]: I1002 11:30:39.917202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:39 crc kubenswrapper[4725]: E1002 11:30:39.917500 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.417486503 +0000 UTC m=+160.324985966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.018995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.019548 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.519518082 +0000 UTC m=+160.427017605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.076770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" event={"ID":"c1db9267-244a-40a4-ae74-8ce562f97a4c","Type":"ContainerStarted","Data":"ca5d0ef0477d4b2742d6195b19ec098fa3098d4388cf0d018d84e70ab915234f"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.080889 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fcfzl" event={"ID":"871a005b-f47d-4e0c-b704-95ada1fa0584","Type":"ContainerStarted","Data":"a44a1b77a6400e5ecff57c056e997795043989d6a64407129f755278b683950e"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.082097 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7f44ff9502170a7839bf957202a302ab25a5bce6e06dcd3208c232ba5ee9a59e"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.083481 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" event={"ID":"d3648e0c-583a-44a3-8b06-3d1b1ae1491b","Type":"ContainerStarted","Data":"26ec57720be550614cf9881640d37b7d8005463e169452c6ac397ce0b03e0b95"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.088108 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pzz9t" event={"ID":"3a3bc794-c04f-4e8d-9c4b-dd3502e017b3","Type":"ContainerStarted","Data":"1c46d393ed2585767564c856a711ab2a21e15168c8099c2b2583e8d3c0924217"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.089983 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" 
event={"ID":"f3ff8f8f-8453-4b51-83fa-9aeda104fbff","Type":"ContainerStarted","Data":"abfd20c8dec422d462c7cb8753e2852019c3ad25ead69a0301c9373a5ced1a2f"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.091842 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" event={"ID":"825215a6-1ebc-426c-b54d-f54f6c261f55","Type":"ContainerStarted","Data":"717b72f48487fbcaff539bdd439fcd3fd25504f6c25b829e01275bd196c1f976"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.094421 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" event={"ID":"e5b9a719-adef-4213-b931-3f20d44b90b7","Type":"ContainerStarted","Data":"50933450f659e66175875c4c34164c9e9ef26b4f5fb4d21ff38a8e471718b895"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.096228 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" event={"ID":"b8342476-d06a-48c8-84de-89b1531728e1","Type":"ContainerStarted","Data":"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.098029 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" event={"ID":"8fbda729-20cb-4a89-9295-da8fb53f7136","Type":"ContainerStarted","Data":"453cef900ea42b0bfd39ce980c75d244f992db43543f8b481d86fd6f879d5fa7"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.099364 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" event={"ID":"9a84f459-c277-472f-a84c-328e8523f8e0","Type":"ContainerStarted","Data":"d20eabf9be67f15819f802ac50bfec5e39aae79c66df44f857a48965421259b0"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.101683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" event={"ID":"368d8e72-b80a-4336-b454-a73ea5f9a858","Type":"ContainerStarted","Data":"a6d7ff43dbce0010b62b4a0db7396789aec5923ea9129520b795bf0db86522cd"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.103288 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" event={"ID":"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f","Type":"ContainerStarted","Data":"d9a3b81a5323507dc949de6c7947c5995f72f0eea223b969f6ab1c723d946693"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.104660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" event={"ID":"41437658-6209-4ba0-8707-dc873b07d0f3","Type":"ContainerStarted","Data":"439e3766ccbffb07d0d84cc00818abde9be52a7c0072ddbe405c74191aa2f0ff"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.104765 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2sz4f" podStartSLOduration=130.104748792 podStartE2EDuration="2m10.104748792s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:39.191751246 +0000 UTC m=+159.099250709" watchObservedRunningTime="2025-10-02 11:30:40.104748792 +0000 UTC m=+160.012248255" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.105829 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7v5h8" podStartSLOduration=129.105824741 podStartE2EDuration="2m9.105824741s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:40.10432 +0000 UTC m=+160.011819463" watchObservedRunningTime="2025-10-02 11:30:40.105824741 +0000 UTC m=+160.013324204" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.106698 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" event={"ID":"e1135f15-46ec-4d45-8167-d810903ee497","Type":"ContainerStarted","Data":"fb363724d96b5edb6236f272ae6bd93b8d65900941e65850d75afa40ecb35eac"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.107957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" event={"ID":"0d430bf0-eef2-4ec2-9941-57c5005e5931","Type":"ContainerStarted","Data":"7679ae9a2de69d78935c0ace74f84efa06337891a74f6701ef43beec2fd5755e"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.109164 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.110516 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgktv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.110554 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" podUID="0d430bf0-eef2-4ec2-9941-57c5005e5931" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.112955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" event={"ID":"45a64c58-e326-4e39-a87b-94bf31b48c9d","Type":"ContainerStarted","Data":"18db159f6fe1ade0517dc2cb8003d1ff65597553ca1344bbb8fc345eaf1173ab"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.114298 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" event={"ID":"538365c3-9ba5-4fbd-aca1-05525f5a3250","Type":"ContainerStarted","Data":"931b89d04215239ada58d3db71b9d01cc5495fb373692a14928b165dc693e147"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.117213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" event={"ID":"e96f50b0-1c58-4c09-8554-16c1104a7298","Type":"ContainerStarted","Data":"4410ba3f1525fb114e5f39f4d5835957e1e92bf69e468d9a062595297799ad1c"} Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.117779 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-p4dgg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 02 
11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.117823 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" podUID="01689a82-0e94-4995-a494-6c8bc2116e93" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.118634 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.120199 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-574ct container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.120237 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.120253 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" podUID="1019f382-13cb-47d6-ae1f-f4ea54bd3008" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.120816 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.62079448 +0000 UTC m=+160.528293983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.131369 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" podStartSLOduration=129.131349129 podStartE2EDuration="2m9.131349129s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:40.129838558 +0000 UTC m=+160.037338031" watchObservedRunningTime="2025-10-02 11:30:40.131349129 +0000 UTC m=+160.038848592" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.206597 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.206642 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.221892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.224343 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.72433195 +0000 UTC m=+160.631831413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.323157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.323591 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.823571903 +0000 UTC m=+160.731071366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.426407 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.426836 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:40.926814325 +0000 UTC m=+160.834313868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.528209 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.528511 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.028495534 +0000 UTC m=+160.935994997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.629426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.629788 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.129773033 +0000 UTC m=+161.037272496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.730694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.730955 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.230890026 +0000 UTC m=+161.138389489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.731116 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.731407 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.23139624 +0000 UTC m=+161.138895773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.832680 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.832890 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.332862094 +0000 UTC m=+161.240361567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.833291 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.833865 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.333853081 +0000 UTC m=+161.241352544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.934493 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.934785 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.434755839 +0000 UTC m=+161.342255302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:40 crc kubenswrapper[4725]: I1002 11:30:40.934859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:40 crc kubenswrapper[4725]: E1002 11:30:40.935214 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.435203751 +0000 UTC m=+161.342703284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.036416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.036641 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.536612402 +0000 UTC m=+161.444111865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.036794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.037136 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.537122917 +0000 UTC m=+161.444622380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.123895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2c11fcbba093e16ed184e0dde798458b0e443872099efa7ad6ed5417fed08ea3"} Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.124123 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.126205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" event={"ID":"a0940542-0c18-472b-8fe9-2363f88ec264","Type":"ContainerStarted","Data":"b233f2f03f1284f57c8bef5e0b339f1c7dc356385ad879caebea707d7f3efbb0"} Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.127582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ad27b4339bb6b09199e093d8fd0aaf737243fb629052dbd4622efa8407b2db1"} Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.129518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" event={"ID":"c36f3900-450a-437f-9fda-b3c7ccf6b4be","Type":"ContainerStarted","Data":"3a49c146938da0bea61cf509267f8e93aa09393cf51038d23dfd6a96a4cb523d"} Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.130903 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"086c257dc04d17d2cf492c95da6a2206c9ce91cc9f047442128458b07fb4d1c6"} Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.132465 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-574ct container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.132529 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" podUID="1019f382-13cb-47d6-ae1f-f4ea54bd3008" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.133106 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.133168 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:41 crc 
kubenswrapper[4725]: I1002 11:30:41.133184 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.133255 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgktv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.133323 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" podUID="0d430bf0-eef2-4ec2-9941-57c5005e5931" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.134854 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6gs7l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.134871 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m5sv8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.134904 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.134903 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.135146 4725 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-l6fgp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.135206 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" podUID="f3ff8f8f-8453-4b51-83fa-9aeda104fbff" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.137563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.137849 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.63783645 +0000 UTC m=+161.545335913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.143913 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" podStartSLOduration=131.143901096 podStartE2EDuration="2m11.143901096s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:40.189269682 +0000 UTC m=+160.096769145" watchObservedRunningTime="2025-10-02 11:30:41.143901096 +0000 UTC m=+161.051400569" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.159359 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hrs6" podStartSLOduration=131.159337267 podStartE2EDuration="2m11.159337267s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.15795437 +0000 UTC m=+161.065453863" watchObservedRunningTime="2025-10-02 11:30:41.159337267 +0000 UTC m=+161.066836730" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.177999 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" podStartSLOduration=131.177978857 podStartE2EDuration="2m11.177978857s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.174470471 +0000 UTC m=+161.081969944" watchObservedRunningTime="2025-10-02 11:30:41.177978857 +0000 UTC m=+161.085478320" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.196222 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" podStartSLOduration=130.196177834 podStartE2EDuration="2m10.196177834s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.193061579 +0000 UTC m=+161.100561042" watchObservedRunningTime="2025-10-02 11:30:41.196177834 +0000 UTC m=+161.103677297" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.239091 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.240387 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fqgld" podStartSLOduration=131.240369542 podStartE2EDuration="2m11.240369542s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.238043518 +0000 UTC m=+161.145543001" watchObservedRunningTime="2025-10-02 11:30:41.240369542 +0000 UTC m=+161.147869005" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.241828 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.741814922 +0000 UTC m=+161.649314455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.257980 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-smwg9" podStartSLOduration=131.257961173 podStartE2EDuration="2m11.257961173s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.256607027 +0000 UTC m=+161.164106490" watchObservedRunningTime="2025-10-02 11:30:41.257961173 +0000 UTC m=+161.165460636" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.272475 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bstp2" podStartSLOduration=130.27245493 podStartE2EDuration="2m10.27245493s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.271634237 +0000 UTC m=+161.179133700" watchObservedRunningTime="2025-10-02 11:30:41.27245493 +0000 UTC m=+161.179954393" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.296557 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vxnwq" podStartSLOduration=131.296535637 podStartE2EDuration="2m11.296535637s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.293483874 +0000 UTC m=+161.200983347" watchObservedRunningTime="2025-10-02 11:30:41.296535637 +0000 UTC m=+161.204035100" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.312553 4725 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" podStartSLOduration=41.312534694 podStartE2EDuration="41.312534694s" podCreationTimestamp="2025-10-02 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.311207789 +0000 UTC m=+161.218707262" watchObservedRunningTime="2025-10-02 11:30:41.312534694 +0000 UTC m=+161.220034167" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.329653 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" podStartSLOduration=130.329636113 podStartE2EDuration="2m10.329636113s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.328681556 +0000 UTC m=+161.236181049" watchObservedRunningTime="2025-10-02 11:30:41.329636113 +0000 UTC m=+161.237135576" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.340483 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.340885 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.840853439 +0000 UTC m=+161.748352912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.365061 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" podStartSLOduration=130.36503982 podStartE2EDuration="2m10.36503982s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.348932029 +0000 UTC m=+161.256431492" watchObservedRunningTime="2025-10-02 11:30:41.36503982 +0000 UTC m=+161.272539293" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.365416 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" podStartSLOduration=131.36541132 podStartE2EDuration="2m11.36541132s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.362650255 +0000 UTC m=+161.270149718" watchObservedRunningTime="2025-10-02 11:30:41.36541132 +0000 UTC m=+161.272910783" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.379399 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4554v" podStartSLOduration=131.379383382 podStartE2EDuration="2m11.379383382s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.376320948 +0000 UTC m=+161.283820411" watchObservedRunningTime="2025-10-02 11:30:41.379383382 +0000 UTC m=+161.286882845" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.390455 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pzz9t" podStartSLOduration=8.390436384000001 podStartE2EDuration="8.390436384s" podCreationTimestamp="2025-10-02 11:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.387859003 +0000 UTC m=+161.295358466" watchObservedRunningTime="2025-10-02 11:30:41.390436384 +0000 UTC m=+161.297935847" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.404007 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ps6mz" podStartSLOduration=131.403985315 podStartE2EDuration="2m11.403985315s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:41.402740421 +0000 UTC m=+161.310239894" watchObservedRunningTime="2025-10-02 11:30:41.403985315 +0000 UTC m=+161.311484778" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.442126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.442701 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:41.942681262 +0000 UTC m=+161.850180785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.543516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.543749 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.043699564 +0000 UTC m=+161.951199037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.543809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.544120 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.044106465 +0000 UTC m=+161.951605928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.610711 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:41 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:41 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:41 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.610820 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.644873 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.645031 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.145010133 +0000 UTC m=+162.052509596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.645136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.645507 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.145497956 +0000 UTC m=+162.052997419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.680322 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g" Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.746676 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.746867 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.246839126 +0000 UTC m=+162.154338589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.747115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.747449 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.247437492 +0000 UTC m=+162.154937035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.848210 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.848384 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.348361901 +0000 UTC m=+162.255861374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.848408 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.848673 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.348665249 +0000 UTC m=+162.256164712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.949668 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.949825 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.449800244 +0000 UTC m=+162.357299707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:41 crc kubenswrapper[4725]: I1002 11:30:41.949908 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:41 crc kubenswrapper[4725]: E1002 11:30:41.950204 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.450196805 +0000 UTC m=+162.357696268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.051241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.051431 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.551405961 +0000 UTC m=+162.458905424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.051484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.051928 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.551909375 +0000 UTC m=+162.459408848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.139932 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" event={"ID":"0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f","Type":"ContainerStarted","Data":"d200b178c5c00c406f5944e728a0706ccc058372a5ec154c30d15e2bb631ee84"} Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.142251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" event={"ID":"368d8e72-b80a-4336-b454-a73ea5f9a858","Type":"ContainerStarted","Data":"dfbce302e7f32e9d6a491d1c207280078399db32fbbb788eef8f9cf3d1d91183"} Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.144216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" event={"ID":"c1db9267-244a-40a4-ae74-8ce562f97a4c","Type":"ContainerStarted","Data":"89a069aab73fec286116ecfbe122419de83cf2cbca5186606f413d9afd808788"} Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.146656 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fcfzl" event={"ID":"871a005b-f47d-4e0c-b704-95ada1fa0584","Type":"ContainerStarted","Data":"f9dda494ed1620f4c9cf01112c4b82fc42a0f60eecf3271e9f0bc0a36832f770"} Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147317 4725 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-l6fgp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147328 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6gs7l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147355 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" podUID="f3ff8f8f-8453-4b51-83fa-9aeda104fbff" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147373 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147447 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m5sv8 container/marketplace-operator namespace/openshift-marketplace: 
Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147484 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147870 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgktv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.147918 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" podUID="0d430bf0-eef2-4ec2-9941-57c5005e5931" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.152016 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.152139 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.652122694 +0000 UTC m=+162.559622157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.152292 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.152544 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.652536925 +0000 UTC m=+162.560036388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.203129 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:42 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:42 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:42 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.203185 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.253080 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.253207 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.753190556 +0000 UTC m=+162.660690019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.253523 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.254144 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.754119352 +0000 UTC m=+162.661618905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.354876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.355266 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.855237876 +0000 UTC m=+162.762737349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:42 crc kubenswrapper[4725]: I1002 11:30:42.456759 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:42 crc kubenswrapper[4725]: E1002 11:30:42.457694 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:42.957675775 +0000 UTC m=+162.865175248 (durationBeforeRetry 500ms). 
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.152018 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6"
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.172350 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" podStartSLOduration=132.17232831 podStartE2EDuration="2m12.17232831s" podCreationTimestamp="2025-10-02 11:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:43.170946082 +0000 UTC m=+163.078445545" watchObservedRunningTime="2025-10-02 11:30:43.17232831 +0000 UTC m=+163.079827773"
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.192887 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" podStartSLOduration=133.192866951 podStartE2EDuration="2m13.192866951s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:43.191609227 +0000 UTC m=+163.099108690" watchObservedRunningTime="2025-10-02 11:30:43.192866951 +0000 UTC m=+163.100366414"
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.202541 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 02 11:30:43 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Oct 02 11:30:43 crc kubenswrapper[4725]: [+]process-running ok
Oct 02 11:30:43 crc kubenswrapper[4725]: healthz check failed
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.202601 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 02 11:30:43 crc kubenswrapper[4725]: I1002 11:30:43.211053 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vj4zt" podStartSLOduration=133.211038318 podStartE2EDuration="2m13.211038318s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:43.206815582 +0000 UTC m=+163.114315045" watchObservedRunningTime="2025-10-02 11:30:43.211038318 +0000 UTC m=+163.118537781"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.137773 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fcfzl" podStartSLOduration=11.137756898 podStartE2EDuration="11.137756898s" podCreationTimestamp="2025-10-02 11:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:43.234334375 +0000 UTC m=+163.141833828" watchObservedRunningTime="2025-10-02 11:30:44.137756898 +0000 UTC m=+164.045256351"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.138450 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.139151 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.141003 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.141358 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.150444 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.156565 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" event={"ID":"e6e25413-d1ac-4132-80bd-2119f36405cb","Type":"ContainerStarted","Data":"f1b5764ad1280ca4490d7bad1b2550bb74b4c95a976785ce0e9c5d06e8ffb505"}
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.158177 4725 generic.go:334] "Generic (PLEG): container finished" podID="538365c3-9ba5-4fbd-aca1-05525f5a3250" containerID="931b89d04215239ada58d3db71b9d01cc5495fb373692a14928b165dc693e147" exitCode=0
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.158241 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" event={"ID":"538365c3-9ba5-4fbd-aca1-05525f5a3250","Type":"ContainerDied","Data":"931b89d04215239ada58d3db71b9d01cc5495fb373692a14928b165dc693e147"}
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.190266 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.190383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.203622 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 02 11:30:44 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Oct 02 11:30:44 crc kubenswrapper[4725]: [+]process-running ok
Oct 02 11:30:44 crc kubenswrapper[4725]: healthz check failed
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.203682 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.292440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.293041 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.293118 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.315170 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.402742 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.402799 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.403399 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.403424 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.454773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.761454 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.789166 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"]
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.790027 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.795349 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.804228 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"]
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.904022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.904047 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzw5\" (UniqueName: \"kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.904079 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.913319 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wgm9g"
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.978411 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 11:30:44 crc kubenswrapper[4725]: I1002 11:30:44.978483 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.004935 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.004993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.005149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.005179 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jzw5\" (UniqueName: \"kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.005291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.005355 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.505341673 +0000 UTC m=+165.412841206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.005691 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.006973 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.007765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.015155 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.029679 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzw5\" (UniqueName: \"kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5\") pod \"certified-operators-kf8zm\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") " pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.037871 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.106520 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.106825 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbws\" (UniqueName: \"kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.106850 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.106910 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content\") pod \"community-operators-zxc7r\" (UID: 
\"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.107010 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.606995492 +0000 UTC m=+165.514494955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.163227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a448dc2a-523e-40ce-a363-fd6c1d64b700","Type":"ContainerStarted","Data":"e074b7068fc59b52a73437afddcbbcd3ff65736c7ca4c6b62e818ea5920cae8a"} Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.170831 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.178924 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.179741 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.200503 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.211398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvbws\" (UniqueName: \"kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.211443 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.211487 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.211525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content\") pod \"community-operators-zxc7r\" (UID: 
\"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.211913 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.212886 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.712864935 +0000 UTC m=+165.620364398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.213735 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:45 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:45 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:45 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.213825 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.232124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.245338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvbws\" (UniqueName: \"kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws\") pod \"community-operators-zxc7r\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.312399 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.312576 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.312625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.812591081 +0000 UTC m=+165.720090554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.312692 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.312870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgmj\" (UniqueName: \"kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.312917 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.313034 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.813023603 +0000 UTC m=+165.720523066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.330571 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.380986 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2wnr"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.384831 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.391560 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wnr"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.413476 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.413613 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgmj\" (UniqueName: \"kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.413639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.413673 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.413711 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.414194 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:45.914176098 +0000 UTC m=+165.821675561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.416133 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.416534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.437742 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgmj\" (UniqueName: \"kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj\") pod \"certified-operators-h5vpb\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.443577 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"] Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515276 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume\") pod \"538365c3-9ba5-4fbd-aca1-05525f5a3250\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgv4v\" (UniqueName: \"kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v\") pod \"538365c3-9ba5-4fbd-aca1-05525f5a3250\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume\") pod \"538365c3-9ba5-4fbd-aca1-05525f5a3250\" (UID: \"538365c3-9ba5-4fbd-aca1-05525f5a3250\") " Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515556 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqs2\" (UniqueName: \"kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.515629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.516500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume" (OuterVolumeSpecName: "config-volume") pod "538365c3-9ba5-4fbd-aca1-05525f5a3250" (UID: "538365c3-9ba5-4fbd-aca1-05525f5a3250"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.516869 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.016852564 +0000 UTC m=+165.924352107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.519946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v" (OuterVolumeSpecName: "kube-api-access-bgv4v") pod "538365c3-9ba5-4fbd-aca1-05525f5a3250" (UID: "538365c3-9ba5-4fbd-aca1-05525f5a3250"). InnerVolumeSpecName "kube-api-access-bgv4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.519987 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "538365c3-9ba5-4fbd-aca1-05525f5a3250" (UID: "538365c3-9ba5-4fbd-aca1-05525f5a3250"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.539137 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.546525 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.575938 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:30:45 crc kubenswrapper[4725]: W1002 11:30:45.581354 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c33e8b3_403f_47e9_8323_f31c5c5195d7.slice/crio-51a96ca2f818ce4ac8bfbc9d6d3a98c7781634293644d8e5394c7a5b6de04eef WatchSource:0}: Error finding container 51a96ca2f818ce4ac8bfbc9d6d3a98c7781634293644d8e5394c7a5b6de04eef: Status 404 returned error can't find the container with id 51a96ca2f818ce4ac8bfbc9d6d3a98c7781634293644d8e5394c7a5b6de04eef Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.617502 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.117484565 +0000 UTC m=+166.024984028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617786 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617862 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqs2\" (UniqueName: \"kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2\") pod 
\"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617918 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538365c3-9ba5-4fbd-aca1-05525f5a3250-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617934 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgv4v\" (UniqueName: \"kubernetes.io/projected/538365c3-9ba5-4fbd-aca1-05525f5a3250-kube-api-access-bgv4v\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.617950 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538365c3-9ba5-4fbd-aca1-05525f5a3250-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.618073 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.618391 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.11838301 +0000 UTC m=+166.025882473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.618557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.637422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbqs2\" (UniqueName: \"kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2\") pod \"community-operators-l2wnr\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.642910 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.643215 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.709401 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.718978 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.719151 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.219128944 +0000 UTC m=+166.126628407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.719216 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.719477 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.219468403 +0000 UTC m=+166.126967866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.733147 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.751804 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:30:45 crc kubenswrapper[4725]: W1002 11:30:45.781181 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod883bf189_bd31_45eb_ac44_35fe1b065be7.slice/crio-e1f8d3adac9b52e130202dc6e42f6f02fefdafd31596d6dc149023cafbaf1902 WatchSource:0}: Error finding container e1f8d3adac9b52e130202dc6e42f6f02fefdafd31596d6dc149023cafbaf1902: Status 404 returned error can't find the container with id e1f8d3adac9b52e130202dc6e42f6f02fefdafd31596d6dc149023cafbaf1902 Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.821200 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.821427 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.321400959 +0000 UTC m=+166.228900422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.821535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.821928 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.321914043 +0000 UTC m=+166.229413506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.862227 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-p4dgg" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.923485 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:45 crc kubenswrapper[4725]: E1002 11:30:45.925115 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.425094434 +0000 UTC m=+166.332593907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.947255 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.949473 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.955501 4725 patch_prober.go:28] interesting pod/console-f9d7485db-kq4vt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.955548 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kq4vt" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.987952 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.988086 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:45 crc kubenswrapper[4725]: I1002 11:30:45.994890 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 
11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.023765 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wnr"] Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.025332 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.028274 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.528252452 +0000 UTC m=+166.435751975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: W1002 11:30:46.030361 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffba198_b83b_4263_a35d_ef5a3cc41852.slice/crio-0777d591079d398f573c2cfc9b3889caecfa4bd704b42407fc2838cc1511262c WatchSource:0}: Error finding container 0777d591079d398f573c2cfc9b3889caecfa4bd704b42407fc2838cc1511262c: Status 404 returned error can't find the container with id 0777d591079d398f573c2cfc9b3889caecfa4bd704b42407fc2838cc1511262c Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.126261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.126417 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.626388846 +0000 UTC m=+166.533888299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.127367 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.127696 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.627684651 +0000 UTC m=+166.535184124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.129992 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.170402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a448dc2a-523e-40ce-a363-fd6c1d64b700","Type":"ContainerStarted","Data":"57b2bb91b496bcf8aa3269c533a0fa1f74418bfb2ccfcbab4273e804ad73c140"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.172133 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerStarted","Data":"3a5f734c75adb7b25734153b763bd0032a2fc9c8f61a3a8c865f8d26ab4bd594"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.173550 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerStarted","Data":"0777d591079d398f573c2cfc9b3889caecfa4bd704b42407fc2838cc1511262c"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.174939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerStarted","Data":"51a96ca2f818ce4ac8bfbc9d6d3a98c7781634293644d8e5394c7a5b6de04eef"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.178817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" 
event={"ID":"538365c3-9ba5-4fbd-aca1-05525f5a3250","Type":"ContainerDied","Data":"1ca929e006691a6f77f5e3f0a5535fce03a97d4f69e2aa678572f87e44a6683f"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.178842 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca929e006691a6f77f5e3f0a5535fce03a97d4f69e2aa678572f87e44a6683f" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.178891 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.181741 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-574ct" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.184420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerStarted","Data":"e1f8d3adac9b52e130202dc6e42f6f02fefdafd31596d6dc149023cafbaf1902"} Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.192483 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b5cg7" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.199513 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.206189 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:46 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:46 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:46 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.206254 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.209640 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgktv" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.214282 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l6fgp" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.229646 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.231326 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.730936183 +0000 UTC m=+166.638435646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.237702 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.239036 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.738983273 +0000 UTC m=+166.646482836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.258358 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.258870 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538365c3-9ba5-4fbd-aca1-05525f5a3250" containerName="collect-profiles" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.258891 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="538365c3-9ba5-4fbd-aca1-05525f5a3250" containerName="collect-profiles" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.259049 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="538365c3-9ba5-4fbd-aca1-05525f5a3250" containerName="collect-profiles" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.259510 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.261740 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.261900 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.268674 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.341414 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.341620 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.341670 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.342035 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.842014079 +0000 UTC m=+166.749513542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.442966 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.443038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.443058 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.443131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.443458 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:46.943441141 +0000 UTC m=+166.850940614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.460584 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.544139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.544536 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.044522814 +0000 UTC m=+166.952022277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.546410 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.575098 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.645978 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.646307 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.146292296 +0000 UTC m=+167.053791759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.747023 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.747268 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.247236225 +0000 UTC m=+167.154735698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.747342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.747661 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.247648846 +0000 UTC m=+167.155148309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.847941 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.848111 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.348090002 +0000 UTC m=+167.255589465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.848326 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.848625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.348615336 +0000 UTC m=+167.256114789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.949928 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.950119 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.45009905 +0000 UTC m=+167.357598513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.950179 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:46 crc kubenswrapper[4725]: E1002 11:30:46.950531 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.450520592 +0000 UTC m=+167.358020065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.962842 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 11:30:46 crc kubenswrapper[4725]: W1002 11:30:46.976271 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb38172d5_48b9_4461_aa8b_7f4384bbed88.slice/crio-dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e WatchSource:0}: Error finding container dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e: Status 404 returned error can't find the container with id dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.990305 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.991348 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:46 crc kubenswrapper[4725]: I1002 11:30:46.993936 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.004495 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.051354 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.051815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.051865 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.051901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfn79\" (UniqueName: \"kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 
11:30:47.051983 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.551970745 +0000 UTC m=+167.459470208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.152864 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.152912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.152942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.152981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfn79\" (UniqueName: \"kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.153201 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.653188602 +0000 UTC m=+167.560688065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.153361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.153457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.177042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfn79\" (UniqueName: \"kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79\") pod \"redhat-marketplace-btvqf\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") " pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.189219 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerStarted","Data":"bb962986eb150dc5b190b1c03f87f1e7d5ee2a02e825f48f773556892e129e3d"} Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.189955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b38172d5-48b9-4461-aa8b-7f4384bbed88","Type":"ContainerStarted","Data":"dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e"} Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.209398 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:47 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:47 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:47 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.209496 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.209986 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.209965873 podStartE2EDuration="3.209965873s" podCreationTimestamp="2025-10-02 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 
11:30:47.20728364 +0000 UTC m=+167.114783143" watchObservedRunningTime="2025-10-02 11:30:47.209965873 +0000 UTC m=+167.117465346" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.253980 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.254152 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.75412339 +0000 UTC m=+167.661622853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.254337 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.254756 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.754741317 +0000 UTC m=+167.662240790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.343261 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.358420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.358616 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-02 11:30:47.858587186 +0000 UTC m=+167.766086649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.358897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.359200 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.859193482 +0000 UTC m=+167.766692945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.382330 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"] Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.383770 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.397965 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"] Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.459589 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.459901 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.959868924 +0000 UTC m=+167.867368387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.460001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.460077 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.460102 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk95m\" (UniqueName: \"kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.460118 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.460461 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:47.960435089 +0000 UTC m=+167.867934552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.561710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.561934 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.061907033 +0000 UTC m=+167.969406496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.562245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.562333 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.562370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk95m\" (UniqueName: \"kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.562400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.562624 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.062608432 +0000 UTC m=+167.970107895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.562800 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.563153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.569965 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.578251 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk95m\" (UniqueName: \"kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m\") pod \"redhat-marketplace-f5wcl\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") " pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.664168 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.664440 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.164425415 +0000 UTC m=+168.071924878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.708299 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5wcl" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.765497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.766088 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.266073304 +0000 UTC m=+168.173572767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.866644 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.867449 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.367429794 +0000 UTC m=+168.274929257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.891061 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"] Oct 02 11:30:47 crc kubenswrapper[4725]: W1002 11:30:47.896141 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8d0ae5_0192_4a4f_92ce_f93cf4dc306b.slice/crio-dfabd1856483222e07285cbdb6723eca49fe1190a5977979c410811789a855ee WatchSource:0}: Error finding container dfabd1856483222e07285cbdb6723eca49fe1190a5977979c410811789a855ee: Status 404 returned error can't find the container with id dfabd1856483222e07285cbdb6723eca49fe1190a5977979c410811789a855ee Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.968859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:47 crc kubenswrapper[4725]: E1002 11:30:47.969274 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.469254848 +0000 UTC m=+168.376754371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.978039 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.979160 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.981063 4725 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g6fnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]log ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]etcd ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/generic-apiserver-start-informers ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/max-in-flight-filter ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 02 11:30:47 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 02 11:30:47 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectcache ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 02 11:30:47 crc kubenswrapper[4725]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 02 11:30:47 crc kubenswrapper[4725]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 02 11:30:47 crc kubenswrapper[4725]: livez check failed Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.981135 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" podUID="0d4bbd99-d18d-48d4-aa5d-0da65f7edc2f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:47 crc kubenswrapper[4725]: I1002 11:30:47.981541 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.005394 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.070255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.070423 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.570395702 +0000 UTC m=+168.477895165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.070489 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mm4m\" (UniqueName: \"kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.070558 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.070622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.070737 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.071040 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.571028209 +0000 UTC m=+168.478527672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.172540 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.172690 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.672670788 +0000 UTC m=+168.580170251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.172885 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mm4m\" (UniqueName: \"kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.172937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.172955 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.172990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.173327 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.673310896 +0000 UTC m=+168.580810359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.173388 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.173564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.194439 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mm4m\" (UniqueName: \"kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m\") pod \"redhat-operators-dcv4l\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.201668 4725 generic.go:334] "Generic (PLEG): container finished" podID="a448dc2a-523e-40ce-a363-fd6c1d64b700" containerID="57b2bb91b496bcf8aa3269c533a0fa1f74418bfb2ccfcbab4273e804ad73c140" exitCode=0 Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.201755 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a448dc2a-523e-40ce-a363-fd6c1d64b700","Type":"ContainerDied","Data":"57b2bb91b496bcf8aa3269c533a0fa1f74418bfb2ccfcbab4273e804ad73c140"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.202920 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:48 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:48 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:48 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.202961 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.203637 4725 generic.go:334] "Generic (PLEG): container finished" podID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerID="bb962986eb150dc5b190b1c03f87f1e7d5ee2a02e825f48f773556892e129e3d" exitCode=0 Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.203735 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerDied","Data":"bb962986eb150dc5b190b1c03f87f1e7d5ee2a02e825f48f773556892e129e3d"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.205212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerStarted","Data":"dfabd1856483222e07285cbdb6723eca49fe1190a5977979c410811789a855ee"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.206409 4725 generic.go:334] "Generic (PLEG): container finished" podID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerID="06d16bd881ba34546c0691f0b8a5ababe28501043ce275eea4fe219a82f4fa47" exitCode=0 Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.206457 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerDied","Data":"06d16bd881ba34546c0691f0b8a5ababe28501043ce275eea4fe219a82f4fa47"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.207894 4725 generic.go:334] "Generic (PLEG): container finished" podID="affba198-b83b-4263-a35d-ef5a3cc41852" containerID="11b904cc8f7382880396a2ab6065e3badb29b50dd84bf9f6d15396a258dd3a47" exitCode=0 Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.207933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerDied","Data":"11b904cc8f7382880396a2ab6065e3badb29b50dd84bf9f6d15396a258dd3a47"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.209538 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerStarted","Data":"a2cfccd50adf93cb276cfc6f1ce5473c2e40ac44e48b58b166d577c252798107"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.210767 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerStarted","Data":"007807e5b7a281285311a688aaf26b4e573c08d89a35a534cf82242f13c1928f"} Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.274047 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.274331 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.774303536 +0000 UTC m=+168.681802989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.274866 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.276873 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.776854315 +0000 UTC m=+168.684353848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.376363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.376590 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.876548731 +0000 UTC m=+168.784048194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.377004 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.377375 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.877364052 +0000 UTC m=+168.784863585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.383385 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.384672 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.398108 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.407910 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.478298 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.478479 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.978453795 +0000 UTC m=+168.885953258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.478865 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6m9\" (UniqueName: \"kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.478938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.478959 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.478987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.479274 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:48.979264198 +0000 UTC m=+168.886763651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.581635 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.581802 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.08178053 +0000 UTC m=+168.989279993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.581843 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.581890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.581958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl6m9\" (UniqueName: \"kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.582090 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.582345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.582380 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.582385 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.082373097 +0000 UTC m=+168.989872560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.599629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6m9\" (UniqueName: \"kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9\") pod \"redhat-operators-22dkc\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.602462 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 11:30:48 crc kubenswrapper[4725]: W1002 11:30:48.607550 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c9faeda_7818_4db8_95db_3b95a8458ee7.slice/crio-cec0a23c618a6ed3e43166173810a38487efac9cc42def082c553c253e52b36b WatchSource:0}: Error finding container cec0a23c618a6ed3e43166173810a38487efac9cc42def082c553c253e52b36b: Status 404 returned error can't find the container with id cec0a23c618a6ed3e43166173810a38487efac9cc42def082c553c253e52b36b Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.683767 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.684147 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.184130187 +0000 UTC m=+169.091629660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.705782 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.785640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.785981 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.285970171 +0000 UTC m=+169.193469634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.886765 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.886951 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.386927381 +0000 UTC m=+169.294426844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.887282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.887599 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.387591559 +0000 UTC m=+169.295091102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.988359 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.988572 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.488529728 +0000 UTC m=+169.396029181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:48 crc kubenswrapper[4725]: I1002 11:30:48.988626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:48 crc kubenswrapper[4725]: E1002 11:30:48.988943 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.488930399 +0000 UTC m=+169.396429852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.090467 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.090671 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.590642719 +0000 UTC m=+169.498142182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.090859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.091255 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.591240225 +0000 UTC m=+169.498739688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.093437 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:30:49 crc kubenswrapper[4725]: W1002 11:30:49.101545 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b2025e3_9e4b_4b71_8906_cc1e635f6e3a.slice/crio-780d3a0c221b2fda88354d1a3efe71d5b853b236893f14456bef4b565a58a7b5 WatchSource:0}: Error finding container 780d3a0c221b2fda88354d1a3efe71d5b853b236893f14456bef4b565a58a7b5: Status 404 returned error can't find the container with id 780d3a0c221b2fda88354d1a3efe71d5b853b236893f14456bef4b565a58a7b5 Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.192370 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.192564 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.692538344 +0000 UTC m=+169.600037807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.192627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.192949 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.692940486 +0000 UTC m=+169.600439949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.202863 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:49 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:49 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:49 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.202921 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.216019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b38172d5-48b9-4461-aa8b-7f4384bbed88","Type":"ContainerStarted","Data":"8236b1e7befc377d784cd1022cf3c1d0ce14c747776e80a1c849e6720f946807"} Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.217339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerStarted","Data":"780d3a0c221b2fda88354d1a3efe71d5b853b236893f14456bef4b565a58a7b5"} Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.218230 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerStarted","Data":"cec0a23c618a6ed3e43166173810a38487efac9cc42def082c553c253e52b36b"} Oct 02 11:30:49 crc kubenswrapper[4725]: 
I1002 11:30:49.219771 4725 generic.go:334] "Generic (PLEG): container finished" podID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerID="a2cfccd50adf93cb276cfc6f1ce5473c2e40ac44e48b58b166d577c252798107" exitCode=0 Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.219810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerDied","Data":"a2cfccd50adf93cb276cfc6f1ce5473c2e40ac44e48b58b166d577c252798107"} Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.221572 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.293647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.293834 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.793813743 +0000 UTC m=+169.701313206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.294007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.294283 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.794274745 +0000 UTC m=+169.701774208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.398212 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.398334 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.898316049 +0000 UTC m=+169.805815512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.398714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.398956 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.898947886 +0000 UTC m=+169.806447349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.430576 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.499681 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir\") pod \"a448dc2a-523e-40ce-a363-fd6c1d64b700\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.499829 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a448dc2a-523e-40ce-a363-fd6c1d64b700" (UID: "a448dc2a-523e-40ce-a363-fd6c1d64b700"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.499878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.499997 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:49.999975308 +0000 UTC m=+169.907474781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.500046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access\") pod \"a448dc2a-523e-40ce-a363-fd6c1d64b700\" (UID: \"a448dc2a-523e-40ce-a363-fd6c1d64b700\") " Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.500262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.500555 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a448dc2a-523e-40ce-a363-fd6c1d64b700-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.500614 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.000596334 +0000 UTC m=+169.908095797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.506306 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a448dc2a-523e-40ce-a363-fd6c1d64b700" (UID: "a448dc2a-523e-40ce-a363-fd6c1d64b700"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.601204 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.601411 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.101367989 +0000 UTC m=+170.008867452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.601565 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.601659 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a448dc2a-523e-40ce-a363-fd6c1d64b700-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.601908 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.101900124 +0000 UTC m=+170.009399587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.702886 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.703127 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.20309369 +0000 UTC m=+170.110593153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.703257 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.703617 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.203602713 +0000 UTC m=+170.111102186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.804676 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.804814 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.30479843 +0000 UTC m=+170.212297893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.805309 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.805678 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.305661483 +0000 UTC m=+170.213160976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.907277 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.907424 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.407398204 +0000 UTC m=+170.314897667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:49 crc kubenswrapper[4725]: I1002 11:30:49.907547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:49 crc kubenswrapper[4725]: E1002 11:30:49.908209 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.408179026 +0000 UTC m=+170.315678529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.008375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.008510 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.508489137 +0000 UTC m=+170.415988620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.008669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.009035 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.509021072 +0000 UTC m=+170.416520555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.109695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.109890 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.609861109 +0000 UTC m=+170.517360572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.109984 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.110278 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.61027061 +0000 UTC m=+170.517770073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.203391 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:50 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:50 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:50 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.203505 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.211149 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.211276 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.71125072 +0000 UTC m=+170.618750183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.211455 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.211781 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.711773513 +0000 UTC m=+170.619272976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.228376 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.232996 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a448dc2a-523e-40ce-a363-fd6c1d64b700","Type":"ContainerDied","Data":"e074b7068fc59b52a73437afddcbbcd3ff65736c7ca4c6b62e818ea5920cae8a"} Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.233059 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e074b7068fc59b52a73437afddcbbcd3ff65736c7ca4c6b62e818ea5920cae8a" Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.312279 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.312777 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.81261307 +0000 UTC m=+170.720112543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.413893 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.414426 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:50.914402863 +0000 UTC m=+170.821902316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... the identical UnmountVolume.TearDown / MountVolume.MountDevice failure pair repeats at 11:30:50.514 and 11:30:50.617, each attempt re-arming the 500 ms backoff; duplicate records omitted ...]
Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.618172 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.118156883 +0000 UTC m=+171.025656396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.647636 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.652461 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g6fnf" Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.724040 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.725619 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.22559876 +0000 UTC m=+171.133098233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:50 crc kubenswrapper[4725]: I1002 11:30:50.826378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.826672 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.326658473 +0000 UTC m=+171.234157936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... the identical failure pair repeats at 11:30:50.927; duplicate records omitted ...]
Oct 02 11:30:50 crc kubenswrapper[4725]: E1002 11:30:50.927920 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.42791054 +0000 UTC m=+171.335410063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.028538 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.028702 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.528671445 +0000 UTC m=+171.436170908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.028907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.029305 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.529291001 +0000 UTC m=+171.436790484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.130091 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.630067006 +0000 UTC m=+171.537566499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.129960 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.130552 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.130972 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.630957551 +0000 UTC m=+171.538457044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.203258 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:51 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:51 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:51 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.203323 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.232084 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.232364 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.732325841 +0000 UTC m=+171.639825344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.232587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.232983 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.732965189 +0000 UTC m=+171.640464672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.233104 4725 generic.go:334] "Generic (PLEG): container finished" podID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerID="a4fa08e8ce7dbeffff0c79bb7f5b7235cfdf3555212ead759ae6daf49286bff5" exitCode=0 Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.233213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerDied","Data":"a4fa08e8ce7dbeffff0c79bb7f5b7235cfdf3555212ead759ae6daf49286bff5"} Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.234442 4725 generic.go:334] "Generic (PLEG): container finished" podID="f796b79e-6656-4260-8a9c-7ed986582af9" containerID="7e234b8b6f75e0df996b58bc67fe935108c8063e2ae57f4515602b9531d5f8d2" exitCode=0 Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.234472 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerDied","Data":"7e234b8b6f75e0df996b58bc67fe935108c8063e2ae57f4515602b9531d5f8d2"} Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.235695 4725 generic.go:334] "Generic (PLEG): container finished" podID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerID="e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2" exitCode=0 Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.235763 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerDied","Data":"e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2"} Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.276560 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.2765355 podStartE2EDuration="5.2765355s" podCreationTimestamp="2025-10-02 11:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:51.273663231 +0000 UTC m=+171.181162714" watchObservedRunningTime="2025-10-02 11:30:51.2765355 +0000 UTC m=+171.184034983" Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.333675 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.334358 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.834339929 +0000 UTC m=+171.741839392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.434822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.435141 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:51.935127144 +0000 UTC m=+171.842626677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... the identical failure pair repeats at 11:30:51.536; duplicate records omitted ...]
Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.537340 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.037333648 +0000 UTC m=+171.944833111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.541157 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fcfzl" Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.638884 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.639091 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.139066179 +0000 UTC m=+172.046565642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.640571 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.641053 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.141033503 +0000 UTC m=+172.048532966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... the identical failure pair repeats at 11:30:51.741; duplicate records omitted ...]
Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.742316 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.242303941 +0000 UTC m=+172.149803404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.843108 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.843692 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.343669152 +0000 UTC m=+172.251168625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:51 crc kubenswrapper[4725]: I1002 11:30:51.947394 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:51 crc kubenswrapper[4725]: E1002 11:30:51.947791 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.447774887 +0000 UTC m=+172.355274350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.048833 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.548752907 +0000 UTC m=+172.456252370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.048557 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.049219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.049603 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.549530499 +0000 UTC m=+172.457029962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.150079 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.150570 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.65055016 +0000 UTC m=+172.558049633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.150660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.150688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.151047 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.651036963 +0000 UTC m=+172.558536426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.159430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6af8c70-d2e8-4891-bf65-1deb3fb02044-metrics-certs\") pod \"network-metrics-daemon-zxhp4\" (UID: \"a6af8c70-d2e8-4891-bf65-1deb3fb02044\") " pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.204443 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:52 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:52 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:52 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.204535 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.250463 4725 generic.go:334] "Generic (PLEG): container finished" podID="b38172d5-48b9-4461-aa8b-7f4384bbed88" 
containerID="8236b1e7befc377d784cd1022cf3c1d0ce14c747776e80a1c849e6720f946807" exitCode=0 Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.250540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b38172d5-48b9-4461-aa8b-7f4384bbed88","Type":"ContainerDied","Data":"8236b1e7befc377d784cd1022cf3c1d0ce14c747776e80a1c849e6720f946807"} Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.251199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.251573 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.75155361 +0000 UTC m=+172.659053073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.251642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.251965 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.751954312 +0000 UTC m=+172.659453785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.253204 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" event={"ID":"e6e25413-d1ac-4132-80bd-2119f36405cb","Type":"ContainerStarted","Data":"15e90d2cc230951ef8bfa7bffff007c7730398f31448070b4007d578164b19fc"} Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.255852 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerID="d2804e7fb022eb9658fc093fc55d67cb05041982a4f957d54f4e67cadf0aa9a7" exitCode=0 Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.255943 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerDied","Data":"d2804e7fb022eb9658fc093fc55d67cb05041982a4f957d54f4e67cadf0aa9a7"} Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.299817 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zxhp4" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.352267 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.353103 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.853084906 +0000 UTC m=+172.760584369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.388257 4725 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.453596 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.454001 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:52.953984984 +0000 UTC m=+172.861484447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.555640 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.556216 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 11:30:53.056193018 +0000 UTC m=+172.963692491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.657864 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: E1002 11:30:52.658209 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 11:30:53.158196196 +0000 UTC m=+173.065695659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99xdm" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.738839 4725 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-02T11:30:52.388278918Z","Handler":null,"Name":""} Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.749847 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zxhp4"] Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.753304 4725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.753331 4725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.765750 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.776014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.867833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.873616 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.873661 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:52 crc kubenswrapper[4725]: I1002 11:30:52.901138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99xdm\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.021679 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.203155 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:53 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:53 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:53 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.203426 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.244510 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:30:53 crc kubenswrapper[4725]: W1002 11:30:53.261797 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9fc733_da08_4b08_a234_f85424ed53cb.slice/crio-d1d8489f0174bbc5cad13e62a224fac76c52653e4e02843d60fc9ac1d50d4daf WatchSource:0}: Error finding container d1d8489f0174bbc5cad13e62a224fac76c52653e4e02843d60fc9ac1d50d4daf: Status 404 returned error can't find the container with id d1d8489f0174bbc5cad13e62a224fac76c52653e4e02843d60fc9ac1d50d4daf Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.279143 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.280598 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" event={"ID":"a6af8c70-d2e8-4891-bf65-1deb3fb02044","Type":"ContainerStarted","Data":"52bbe686fa6938e4f7e0a97125184b4ba22467c62c933c01f2c27d772787b6c0"} Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.280649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" event={"ID":"a6af8c70-d2e8-4891-bf65-1deb3fb02044","Type":"ContainerStarted","Data":"d6928356fa97877b527d64ffc232e3ff08a1a3f030f164739b625095c91cb4e7"} Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.306073 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" event={"ID":"e6e25413-d1ac-4132-80bd-2119f36405cb","Type":"ContainerStarted","Data":"07f4df045e32d7fec5b49e184b3e86f0907779023df2542de0580283091e2374"} Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.306144 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" event={"ID":"e6e25413-d1ac-4132-80bd-2119f36405cb","Type":"ContainerStarted","Data":"cfc0bb9cb54ec49127690674f0502def7ac2a2fb5ba607581b6d66b780b50858"} Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.326323 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tmvx6" podStartSLOduration=20.326302087 podStartE2EDuration="20.326302087s" podCreationTimestamp="2025-10-02 11:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:53.324305552 +0000 UTC m=+173.231805005" watchObservedRunningTime="2025-10-02 11:30:53.326302087 +0000 UTC m=+173.233801550" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.587518 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.679255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access\") pod \"b38172d5-48b9-4461-aa8b-7f4384bbed88\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.679305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir\") pod \"b38172d5-48b9-4461-aa8b-7f4384bbed88\" (UID: \"b38172d5-48b9-4461-aa8b-7f4384bbed88\") " Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.679559 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b38172d5-48b9-4461-aa8b-7f4384bbed88" (UID: "b38172d5-48b9-4461-aa8b-7f4384bbed88"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.686439 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b38172d5-48b9-4461-aa8b-7f4384bbed88" (UID: "b38172d5-48b9-4461-aa8b-7f4384bbed88"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.780260 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38172d5-48b9-4461-aa8b-7f4384bbed88-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:53 crc kubenswrapper[4725]: I1002 11:30:53.780302 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38172d5-48b9-4461-aa8b-7f4384bbed88-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.203108 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:54 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:54 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:54 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.203187 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.344392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" event={"ID":"ad9fc733-da08-4b08-a234-f85424ed53cb","Type":"ContainerStarted","Data":"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57"} Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.348396 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.348416 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" event={"ID":"ad9fc733-da08-4b08-a234-f85424ed53cb","Type":"ContainerStarted","Data":"d1d8489f0174bbc5cad13e62a224fac76c52653e4e02843d60fc9ac1d50d4daf"} Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.349930 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b38172d5-48b9-4461-aa8b-7f4384bbed88","Type":"ContainerDied","Data":"dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e"} Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.349986 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd590b953bbaa9180e5037a09584ff2b084d6387f0d5e82b2c9490a5ba40562e" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.349977 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.356461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zxhp4" event={"ID":"a6af8c70-d2e8-4891-bf65-1deb3fb02044","Type":"ContainerStarted","Data":"40fad449876b624f8330f9d04f71b1446622b2f21d00a9fa79f05a3a59a46c2d"} Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.371120 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" podStartSLOduration=144.371100265 podStartE2EDuration="2m24.371100265s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:54.366963582 +0000 UTC m=+174.274463045" watchObservedRunningTime="2025-10-02 11:30:54.371100265 +0000 UTC m=+174.278599728" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.386646 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zxhp4" podStartSLOduration=144.38662962 podStartE2EDuration="2m24.38662962s" podCreationTimestamp="2025-10-02 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:30:54.383647128 +0000 UTC m=+174.291146581" watchObservedRunningTime="2025-10-02 11:30:54.38662962 +0000 UTC m=+174.294129083" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.405630 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.405688 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.408013 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-vdrxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 11:30:54 crc kubenswrapper[4725]: I1002 11:30:54.408070 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vdrxk" podUID="ac6f43e5-03b3-49a8-9e46-7c607c06f40c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 11:30:55 crc kubenswrapper[4725]: I1002 11:30:55.203714 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:55 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:55 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:55 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:55 crc kubenswrapper[4725]: I1002 11:30:55.203802 4725 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:55 crc kubenswrapper[4725]: I1002 11:30:55.948416 4725 patch_prober.go:28] interesting pod/console-f9d7485db-kq4vt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Oct 02 11:30:55 crc kubenswrapper[4725]: I1002 11:30:55.948498 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kq4vt" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Oct 02 11:30:56 crc kubenswrapper[4725]: I1002 11:30:56.202456 4725 patch_prober.go:28] interesting pod/router-default-5444994796-dxw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 11:30:56 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 02 11:30:56 crc kubenswrapper[4725]: [+]process-running ok Oct 02 11:30:56 crc kubenswrapper[4725]: healthz check failed Oct 02 11:30:56 crc kubenswrapper[4725]: I1002 11:30:56.202524 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dxw2d" podUID="f1dc1f09-4a5a-43fb-8bf5-c0fdf3cf7732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:30:57 crc kubenswrapper[4725]: I1002 11:30:57.202738 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:30:57 crc kubenswrapper[4725]: I1002 11:30:57.205524 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dxw2d" Oct 02 11:31:04 crc kubenswrapper[4725]: I1002 11:31:04.428959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-vdrxk" Oct 02 11:31:05 crc kubenswrapper[4725]: I1002 11:31:05.952590 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:31:05 crc kubenswrapper[4725]: I1002 11:31:05.957107 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:31:13 crc kubenswrapper[4725]: I1002 11:31:13.027164 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:31:14 crc kubenswrapper[4725]: I1002 11:31:14.978477 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:31:14 crc kubenswrapper[4725]: I1002 11:31:14.978865 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:31:16 crc kubenswrapper[4725]: I1002 11:31:16.200311 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nkzc6" Oct 02 11:31:17 crc kubenswrapper[4725]: I1002 11:31:17.304304 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 11:31:28 crc kubenswrapper[4725]: E1002 11:31:28.863422 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 02 11:31:28 crc kubenswrapper[4725]: E1002 11:31:28.864079 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jzw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kf8zm_openshift-marketplace(1daaf185-adc6-4e07-a8ea-a22ffd4c505a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:28 crc kubenswrapper[4725]: E1002 11:31:28.865306 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kf8zm" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" Oct 02 11:31:31 crc kubenswrapper[4725]: E1002 11:31:31.085197 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 02 11:31:31 crc kubenswrapper[4725]: E1002 11:31:31.085371 4725 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mm4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcv4l_openshift-marketplace(0c9faeda-7818-4db8-95db-3b95a8458ee7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:31 crc kubenswrapper[4725]: E1002 11:31:31.086543 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dcv4l" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" Oct 02 11:31:44 crc kubenswrapper[4725]: I1002 11:31:44.978765 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:31:44 crc kubenswrapper[4725]: I1002 11:31:44.979347 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:31:44 crc kubenswrapper[4725]: I1002 11:31:44.979397 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:31:44 crc kubenswrapper[4725]: I1002 11:31:44.979932 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Oct 02 11:31:44 crc kubenswrapper[4725]: I1002 11:31:44.980085 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2" gracePeriod=600 Oct 02 11:31:50 crc kubenswrapper[4725]: I1002 11:31:50.745638 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2" exitCode=0 Oct 02 11:31:50 crc kubenswrapper[4725]: I1002 11:31:50.745774 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2"} Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.377229 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.377686 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvbws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zxc7r_openshift-marketplace(9c33e8b3-403f-47e9-8323-f31c5c5195d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.379445 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-zxc7r" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.667289 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.667458 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbqs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l2wnr_openshift-marketplace(affba198-b83b-4263-a35d-ef5a3cc41852): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:54 crc kubenswrapper[4725]: E1002 11:31:54.668632 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l2wnr" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" Oct 02 11:31:57 crc kubenswrapper[4725]: E1002 11:31:57.078016 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zxc7r" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" Oct 02 11:31:57 crc kubenswrapper[4725]: E1002 11:31:57.667163 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 02 11:31:57 crc kubenswrapper[4725]: E1002 11:31:57.667552 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-22dkc_openshift-marketplace(6b2025e3-9e4b-4b71-8906-cc1e635f6e3a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:57 crc kubenswrapper[4725]: E1002 11:31:57.668894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-22dkc" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" Oct 02 11:31:58 crc kubenswrapper[4725]: E1002 11:31:58.787374 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-22dkc" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.297090 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.297578 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfn79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btvqf_openshift-marketplace(f796b79e-6656-4260-8a9c-7ed986582af9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.298919 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-btvqf" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.677901 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.678279 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk95m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-f5wcl_openshift-marketplace(8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.680086 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-f5wcl" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" Oct 02 11:31:59 crc kubenswrapper[4725]: I1002 11:31:59.791055 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7"} Oct 02 11:31:59 crc kubenswrapper[4725]: I1002 11:31:59.794074 4725 generic.go:334] "Generic (PLEG): container finished" podID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerID="06c9be8f516f9a9e018d003a176c16e7c46ed41c940fd8b0049802e7d3dccd52" exitCode=0 Oct 02 11:31:59 crc kubenswrapper[4725]: I1002 11:31:59.794110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerDied","Data":"06c9be8f516f9a9e018d003a176c16e7c46ed41c940fd8b0049802e7d3dccd52"} Oct 02 11:31:59 crc kubenswrapper[4725]: E1002 11:31:59.796635 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btvqf" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" Oct 02 11:32:03 crc kubenswrapper[4725]: I1002 11:32:03.826064 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" 
event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerStarted","Data":"5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f"} Oct 02 11:32:03 crc kubenswrapper[4725]: I1002 11:32:03.828329 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerStarted","Data":"f099f49d63e1a6770107e0a624726da0ed138cd9175a91f571ee6f4060de5796"} Oct 02 11:32:03 crc kubenswrapper[4725]: I1002 11:32:03.830132 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerStarted","Data":"534dbbfab4610491e030b5aebfdc439eb6d2a789ed84b3df02c47a8d95140452"} Oct 02 11:32:04 crc kubenswrapper[4725]: I1002 11:32:04.836473 4725 generic.go:334] "Generic (PLEG): container finished" podID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerID="534dbbfab4610491e030b5aebfdc439eb6d2a789ed84b3df02c47a8d95140452" exitCode=0 Oct 02 11:32:04 crc kubenswrapper[4725]: I1002 11:32:04.836585 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerDied","Data":"534dbbfab4610491e030b5aebfdc439eb6d2a789ed84b3df02c47a8d95140452"} Oct 02 11:32:04 crc kubenswrapper[4725]: I1002 11:32:04.840092 4725 generic.go:334] "Generic (PLEG): container finished" podID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerID="f099f49d63e1a6770107e0a624726da0ed138cd9175a91f571ee6f4060de5796" exitCode=0 Oct 02 11:32:04 crc kubenswrapper[4725]: I1002 11:32:04.840382 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerDied","Data":"f099f49d63e1a6770107e0a624726da0ed138cd9175a91f571ee6f4060de5796"} Oct 02 11:32:04 crc kubenswrapper[4725]: I1002 11:32:04.879904 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h5vpb" podStartSLOduration=6.606743631 podStartE2EDuration="1m19.879888689s" podCreationTimestamp="2025-10-02 11:30:45 +0000 UTC" firstStartedPulling="2025-10-02 11:30:50.232999325 +0000 UTC m=+170.140498818" lastFinishedPulling="2025-10-02 11:32:03.506144413 +0000 UTC m=+243.413643876" observedRunningTime="2025-10-02 11:32:04.877779772 +0000 UTC m=+244.785279245" watchObservedRunningTime="2025-10-02 11:32:04.879888689 +0000 UTC m=+244.787388152" Oct 02 11:32:05 crc kubenswrapper[4725]: I1002 11:32:05.547374 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:05 crc kubenswrapper[4725]: I1002 11:32:05.547460 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:07 crc kubenswrapper[4725]: I1002 11:32:07.177169 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-h5vpb" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" probeResult="failure" output=< Oct 02 11:32:07 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Oct 02 11:32:07 crc kubenswrapper[4725]: > Oct 02 11:32:15 crc kubenswrapper[4725]: I1002 11:32:15.960207 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:16 crc kubenswrapper[4725]: I1002 11:32:16.020704 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:16 crc kubenswrapper[4725]: I1002 11:32:16.705338 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:32:17 crc kubenswrapper[4725]: I1002 11:32:17.923130 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h5vpb" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" containerID="cri-o://5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" gracePeriod=2 Oct 02 11:32:19 crc kubenswrapper[4725]: I1002 11:32:19.939070 4725 generic.go:334] "Generic (PLEG): container finished" podID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerID="5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" exitCode=0 Oct 02 11:32:19 crc kubenswrapper[4725]: I1002 11:32:19.939322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerDied","Data":"5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f"} Oct 02 11:32:25 crc kubenswrapper[4725]: E1002 11:32:25.548586 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f is running failed: container process not found" containerID="5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:32:25 crc kubenswrapper[4725]: E1002 11:32:25.549343 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f is running failed: container process not found" containerID="5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:32:25 crc kubenswrapper[4725]: E1002 11:32:25.549932 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f is running failed: container process not found" containerID="5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:32:25 crc kubenswrapper[4725]: E1002 11:32:25.549965 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-h5vpb" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.152060 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.274139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities\") pod \"883bf189-bd31-45eb-ac44-35fe1b065be7\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.274186 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content\") pod \"883bf189-bd31-45eb-ac44-35fe1b065be7\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.274232 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgmj\" (UniqueName: \"kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj\") pod \"883bf189-bd31-45eb-ac44-35fe1b065be7\" (UID: \"883bf189-bd31-45eb-ac44-35fe1b065be7\") " Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.275409 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities" (OuterVolumeSpecName: "utilities") pod "883bf189-bd31-45eb-ac44-35fe1b065be7" (UID: "883bf189-bd31-45eb-ac44-35fe1b065be7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.286104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj" (OuterVolumeSpecName: "kube-api-access-dwgmj") pod "883bf189-bd31-45eb-ac44-35fe1b065be7" (UID: "883bf189-bd31-45eb-ac44-35fe1b065be7"). InnerVolumeSpecName "kube-api-access-dwgmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.317015 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "883bf189-bd31-45eb-ac44-35fe1b065be7" (UID: "883bf189-bd31-45eb-ac44-35fe1b065be7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.375487 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.375522 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883bf189-bd31-45eb-ac44-35fe1b065be7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:32:31 crc kubenswrapper[4725]: I1002 11:32:31.375536 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgmj\" (UniqueName: \"kubernetes.io/projected/883bf189-bd31-45eb-ac44-35fe1b065be7-kube-api-access-dwgmj\") on node \"crc\" DevicePath \"\"" Oct 02 11:32:32 crc kubenswrapper[4725]: I1002 11:32:32.008430 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5vpb" event={"ID":"883bf189-bd31-45eb-ac44-35fe1b065be7","Type":"ContainerDied","Data":"e1f8d3adac9b52e130202dc6e42f6f02fefdafd31596d6dc149023cafbaf1902"} Oct 02 11:32:32 crc kubenswrapper[4725]: I1002 11:32:32.008500 4725 scope.go:117] "RemoveContainer" containerID="5fba60557d943d06431360eea67e7842abbc8a821b7583cab0ae50de096c5e2f" Oct 02 11:32:32 crc kubenswrapper[4725]: I1002 11:32:32.008516 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5vpb" Oct 02 11:32:32 crc kubenswrapper[4725]: I1002 11:32:32.038241 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:32:32 crc kubenswrapper[4725]: I1002 11:32:32.039954 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h5vpb"] Oct 02 11:32:33 crc kubenswrapper[4725]: I1002 11:32:33.278779 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" path="/var/lib/kubelet/pods/883bf189-bd31-45eb-ac44-35fe1b065be7/volumes" Oct 02 11:32:34 crc kubenswrapper[4725]: I1002 11:32:34.792436 4725 scope.go:117] "RemoveContainer" containerID="06c9be8f516f9a9e018d003a176c16e7c46ed41c940fd8b0049802e7d3dccd52" Oct 02 11:32:39 crc kubenswrapper[4725]: I1002 11:32:39.144796 4725 scope.go:117] "RemoveContainer" containerID="a2cfccd50adf93cb276cfc6f1ce5473c2e40ac44e48b58b166d577c252798107" Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.057046 4725 generic.go:334] "Generic (PLEG): container finished" podID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerID="6e81a920a6abe1de1b9be74dd8c0f92b5e3d8aef52fe658c07e0241d7f8458e5" exitCode=0 Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.057134 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerDied","Data":"6e81a920a6abe1de1b9be74dd8c0f92b5e3d8aef52fe658c07e0241d7f8458e5"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.059968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerStarted","Data":"90ad44b59d5a810202ef1934408d35e126c11fbe8b59ee4a2e3ad5a6ed389420"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.063262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerStarted","Data":"1284c551828d47f960172a7c03429ad6da1ceb693d92fffe9e6dcd3769a8df3e"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.065678 4725 generic.go:334] "Generic (PLEG): container finished" podID="f796b79e-6656-4260-8a9c-7ed986582af9" containerID="9d49b36a5316696d99cb95c9214908a1c6490d208223ec4fde0a529b0624cf4c" exitCode=0 Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.065732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerDied","Data":"9d49b36a5316696d99cb95c9214908a1c6490d208223ec4fde0a529b0624cf4c"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.068665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerStarted","Data":"25e0a078e82c47fbc14ade08201ae7ceb99eecb756c784015dac8b8cdbd881eb"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.070698 4725 generic.go:334] "Generic (PLEG): container finished" podID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerID="b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795" exitCode=0 Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.070753 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerDied","Data":"b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.075664 4725 generic.go:334] "Generic (PLEG): container finished" podID="affba198-b83b-4263-a35d-ef5a3cc41852" containerID="ea3c468a063c3840c70b45573a451071f30a6716291b5481bf249264e7a5671a" exitCode=0 Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.075711 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerDied","Data":"ea3c468a063c3840c70b45573a451071f30a6716291b5481bf249264e7a5671a"} Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.125288 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcv4l" podStartSLOduration=12.583184165 podStartE2EDuration="1m54.125268779s" podCreationTimestamp="2025-10-02 11:30:47 +0000 UTC" firstStartedPulling="2025-10-02 11:30:52.257540325 +0000 UTC m=+172.165039788" lastFinishedPulling="2025-10-02 11:32:33.799624939 +0000 UTC m=+273.707124402" observedRunningTime="2025-10-02 11:32:41.1234901 +0000 UTC m=+281.030989573" watchObservedRunningTime="2025-10-02 11:32:41.125268779 +0000 UTC m=+281.032768242" Oct 02 11:32:41 crc kubenswrapper[4725]: I1002 11:32:41.151608 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kf8zm" podStartSLOduration=6.144810543 podStartE2EDuration="1m57.15158277s" podCreationTimestamp="2025-10-02 11:30:44 +0000 UTC" firstStartedPulling="2025-10-02 11:30:49.22126699 +0000 UTC m=+169.128766453" lastFinishedPulling="2025-10-02 11:32:40.228039217 +0000 UTC m=+280.135538680" observedRunningTime="2025-10-02 11:32:41.146298895 +0000 UTC m=+281.053798368" watchObservedRunningTime="2025-10-02 11:32:41.15158277 +0000 UTC m=+281.059082233" Oct 02 11:32:42 crc 
Oct 02 11:32:42 crc kubenswrapper[4725]: I1002 11:32:42.084664 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerID="90ad44b59d5a810202ef1934408d35e126c11fbe8b59ee4a2e3ad5a6ed389420" exitCode=0
Oct 02 11:32:42 crc kubenswrapper[4725]: I1002 11:32:42.084770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerDied","Data":"90ad44b59d5a810202ef1934408d35e126c11fbe8b59ee4a2e3ad5a6ed389420"}
Oct 02 11:32:42 crc kubenswrapper[4725]: I1002 11:32:42.087887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerStarted","Data":"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"}
Oct 02 11:32:43 crc kubenswrapper[4725]: I1002 11:32:43.095317 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerStarted","Data":"0d813273c0504b97f2425fa654969ff189741a39fa71571b739ef256659a3db7"}
Oct 02 11:32:43 crc kubenswrapper[4725]: I1002 11:32:43.111616 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f5wcl" podStartSLOduration=6.5255876950000005 podStartE2EDuration="1m56.111596179s" podCreationTimestamp="2025-10-02 11:30:47 +0000 UTC" firstStartedPulling="2025-10-02 11:30:52.258274824 +0000 UTC m=+172.165774287" lastFinishedPulling="2025-10-02 11:32:41.844283308 +0000 UTC m=+281.751782771" observedRunningTime="2025-10-02 11:32:43.108587596 +0000 UTC m=+283.016087069" watchObservedRunningTime="2025-10-02 11:32:43.111596179 +0000 UTC m=+283.019095652"
Oct 02 11:32:43 crc kubenswrapper[4725]: I1002 11:32:43.124619 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxc7r" podStartSLOduration=7.048967499 podStartE2EDuration="1m59.124604375s" podCreationTimestamp="2025-10-02 11:30:44 +0000 UTC" firstStartedPulling="2025-10-02 11:30:50.235025519 +0000 UTC m=+170.142524982" lastFinishedPulling="2025-10-02 11:32:42.310662385 +0000 UTC m=+282.218161858" observedRunningTime="2025-10-02 11:32:43.12187034 +0000 UTC m=+283.029369813" watchObservedRunningTime="2025-10-02 11:32:43.124604375 +0000 UTC m=+283.032103838"
Oct 02 11:32:44 crc kubenswrapper[4725]: I1002 11:32:44.103429 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerStarted","Data":"786448a795b5a4baaba19bacf4ccbe836c12ac03603c10998dfe7d06e724e96f"}
Oct 02 11:32:44 crc kubenswrapper[4725]: I1002 11:32:44.105945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerStarted","Data":"fb97959edacd9888ff39cf27c7e790c2a6c209b5b80f613e498555b3b72b0a8e"}
Oct 02 11:32:44 crc kubenswrapper[4725]: I1002 11:32:44.122012 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2wnr" podStartSLOduration=6.813209374 podStartE2EDuration="1m59.12199762s" podCreationTimestamp="2025-10-02 11:30:45 +0000 UTC" firstStartedPulling="2025-10-02 11:30:51.238082998 +0000 UTC m=+171.145582471" lastFinishedPulling="2025-10-02 11:32:43.546871254 +0000 UTC m=+283.454370717" observedRunningTime="2025-10-02 11:32:44.119137632 +0000 UTC m=+284.026637095" watchObservedRunningTime="2025-10-02 11:32:44.12199762 +0000 UTC m=+284.029497083"
Oct 02 11:32:44 crc kubenswrapper[4725]: I1002 11:32:44.141623 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btvqf" podStartSLOduration=7.486907626 podStartE2EDuration="1m58.141603007s" podCreationTimestamp="2025-10-02 11:30:46 +0000 UTC" firstStartedPulling="2025-10-02 11:30:52.258938973 +0000 UTC m=+172.166438436" lastFinishedPulling="2025-10-02 11:32:42.913634354 +0000 UTC m=+282.821133817" observedRunningTime="2025-10-02 11:32:44.138592395 +0000 UTC m=+284.046091858" watchObservedRunningTime="2025-10-02 11:32:44.141603007 +0000 UTC m=+284.049102470"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.113639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerStarted","Data":"05de864d1314b6293bbff83517fbccd94fd619ad5cedce6f4b907f1192fc9e5d"}
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.171664 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.172799 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.217765 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.241017 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-22dkc" podStartSLOduration=5.33380372 podStartE2EDuration="1m57.241001118s" podCreationTimestamp="2025-10-02 11:30:48 +0000 UTC" firstStartedPulling="2025-10-02 11:30:52.258572202 +0000 UTC m=+172.166071665" lastFinishedPulling="2025-10-02 11:32:44.16576959 +0000 UTC m=+284.073269063" observedRunningTime="2025-10-02 11:32:45.136546396 +0000 UTC m=+285.044045859" watchObservedRunningTime="2025-10-02 11:32:45.241001118 +0000 UTC m=+285.148500571"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.331927 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxc7r"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.331976 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxc7r"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.374067 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxc7r"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.709922 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2wnr"
Oct 02 11:32:45 crc kubenswrapper[4725]: I1002 11:32:45.709971 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2wnr"
Oct 02 11:32:46 crc kubenswrapper[4725]: I1002 11:32:46.164128 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:32:46 crc kubenswrapper[4725]: I1002 11:32:46.756461 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l2wnr" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="registry-server" probeResult="failure" output=<
Oct 02 11:32:46 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Oct 02 11:32:46 crc kubenswrapper[4725]: >
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.343974 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btvqf"
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.344027 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btvqf"
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.405530 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btvqf"
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.708382 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.708937 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:47 crc kubenswrapper[4725]: I1002 11:32:47.770215 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.168206 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.168558 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btvqf"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.399087 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcv4l"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.399155 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcv4l"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.435009 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcv4l"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.706007 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-22dkc"
Oct 02 11:32:48 crc kubenswrapper[4725]: I1002 11:32:48.706292 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-22dkc"
Oct 02 11:32:49 crc kubenswrapper[4725]: I1002 11:32:49.188322 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcv4l"
Oct 02 11:32:49 crc kubenswrapper[4725]: I1002 11:32:49.754916 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-22dkc" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="registry-server" probeResult="failure" output=<
Oct 02 11:32:49 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Oct 02 11:32:49 crc kubenswrapper[4725]: >
Oct 02 11:32:49 crc kubenswrapper[4725]: I1002 11:32:49.890361 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"]
Oct 02 11:32:51 crc kubenswrapper[4725]: I1002 11:32:51.153154 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f5wcl" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="registry-server" containerID="cri-o://727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233" gracePeriod=2
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.027564 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.159798 4725 generic.go:334] "Generic (PLEG): container finished" podID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerID="727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233" exitCode=0
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.159870 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5wcl"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.159907 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerDied","Data":"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"}
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.161112 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5wcl" event={"ID":"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b","Type":"ContainerDied","Data":"dfabd1856483222e07285cbdb6723eca49fe1190a5977979c410811789a855ee"}
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.161144 4725 scope.go:117] "RemoveContainer" containerID="727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.176614 4725 scope.go:117] "RemoveContainer" containerID="b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.191742 4725 scope.go:117] "RemoveContainer" containerID="e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.205542 4725 scope.go:117] "RemoveContainer" containerID="727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"
Oct 02 11:32:52 crc kubenswrapper[4725]: E1002 11:32:52.205995 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233\": container with ID starting with 727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233 not found: ID does not exist" containerID="727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.206106 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233"} err="failed to get container status \"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233\": rpc error: code = NotFound desc = could not find container \"727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233\": container with ID starting with 727a4f5df1aa7e2791eddaf74a456571bd222ce955cdd7ad4998ec629b717233 not found: ID does not exist"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.206245 4725 scope.go:117] "RemoveContainer" containerID="b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795"
Oct 02 11:32:52 crc kubenswrapper[4725]: E1002 11:32:52.206628 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795\": container with ID starting with b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795 not found: ID does not exist" containerID="b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.206649 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795"} err="failed to get container status \"b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795\": rpc error: code = NotFound desc = could not find container \"b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795\": container with ID starting with b31fac5f44b4162e96c66e5470ca22c93f012780a6fa36e8b70b7d6bbc60c795 not found: ID does not exist"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.206702 4725 scope.go:117] "RemoveContainer" containerID="e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2"
Oct 02 11:32:52 crc kubenswrapper[4725]: E1002 11:32:52.207054 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2\": container with ID starting with e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2 not found: ID does not exist" containerID="e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.207156 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2"} err="failed to get container status \"e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2\": rpc error: code = NotFound desc = could not find container \"e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2\": container with ID starting with e42a153bd7c83a09b6e1dfac3e7e8193069bac7faafc7b49a55679b9e33e10b2 not found: ID does not exist"
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.227811 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities\") pod \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") "
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.228108 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk95m\" (UniqueName: \"kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m\") pod \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") "
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.228346 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content\") pod \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\" (UID: \"8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b\") "
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.229088 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities" (OuterVolumeSpecName: "utilities") pod "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" (UID: "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.233642 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m" (OuterVolumeSpecName: "kube-api-access-vk95m") pod "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" (UID: "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b"). InnerVolumeSpecName "kube-api-access-vk95m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.242996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" (UID: "8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.328987 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.329278 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk95m\" (UniqueName: \"kubernetes.io/projected/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-kube-api-access-vk95m\") on node \"crc\" DevicePath \"\""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.329377 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.495411 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"]
Oct 02 11:32:52 crc kubenswrapper[4725]: I1002 11:32:52.498832 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5wcl"]
Oct 02 11:32:53 crc kubenswrapper[4725]: I1002 11:32:53.275292 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" path="/var/lib/kubelet/pods/8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b/volumes"
Oct 02 11:32:55 crc kubenswrapper[4725]: I1002 11:32:55.369302 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxc7r"
Oct 02 11:32:55 crc kubenswrapper[4725]: I1002 11:32:55.750149 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2wnr"
Oct 02 11:32:55 crc kubenswrapper[4725]: I1002 11:32:55.786699 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2wnr"
Oct 02 11:32:57 crc kubenswrapper[4725]: I1002 11:32:57.291908 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wnr"]
pod="openshift-marketplace/community-operators-l2wnr" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="registry-server" containerID="cri-o://786448a795b5a4baaba19bacf4ccbe836c12ac03603c10998dfe7d06e724e96f" gracePeriod=2 Oct 02 11:32:58 crc kubenswrapper[4725]: I1002 11:32:58.743245 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:32:58 crc kubenswrapper[4725]: I1002 11:32:58.805451 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:32:59 crc kubenswrapper[4725]: I1002 11:32:59.888662 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:33:00 crc kubenswrapper[4725]: I1002 11:33:00.200445 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-22dkc" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="registry-server" containerID="cri-o://05de864d1314b6293bbff83517fbccd94fd619ad5cedce6f4b907f1192fc9e5d" gracePeriod=2 Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.207280 4725 generic.go:334] "Generic (PLEG): container finished" podID="affba198-b83b-4263-a35d-ef5a3cc41852" containerID="786448a795b5a4baaba19bacf4ccbe836c12ac03603c10998dfe7d06e724e96f" exitCode=0 Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.207395 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerDied","Data":"786448a795b5a4baaba19bacf4ccbe836c12ac03603c10998dfe7d06e724e96f"} Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.209991 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerID="05de864d1314b6293bbff83517fbccd94fd619ad5cedce6f4b907f1192fc9e5d" exitCode=0 Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.210023 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerDied","Data":"05de864d1314b6293bbff83517fbccd94fd619ad5cedce6f4b907f1192fc9e5d"} Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.536421 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.637099 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.734408 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities\") pod \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.734574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content\") pod \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.734614 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl6m9\" (UniqueName: \"kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9\") pod \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\" (UID: \"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.735659 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities" (OuterVolumeSpecName: "utilities") pod "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" (UID: "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.741900 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9" (OuterVolumeSpecName: "kube-api-access-fl6m9") pod "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" (UID: "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a"). InnerVolumeSpecName "kube-api-access-fl6m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.813928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" (UID: "6b2025e3-9e4b-4b71-8906-cc1e635f6e3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.835991 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content\") pod \"affba198-b83b-4263-a35d-ef5a3cc41852\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.836109 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqs2\" (UniqueName: \"kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2\") pod \"affba198-b83b-4263-a35d-ef5a3cc41852\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.836249 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities\") pod \"affba198-b83b-4263-a35d-ef5a3cc41852\" (UID: \"affba198-b83b-4263-a35d-ef5a3cc41852\") " Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.836516 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.836561 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.836578 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl6m9\" (UniqueName: \"kubernetes.io/projected/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a-kube-api-access-fl6m9\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.837504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities" (OuterVolumeSpecName: "utilities") pod "affba198-b83b-4263-a35d-ef5a3cc41852" (UID: "affba198-b83b-4263-a35d-ef5a3cc41852"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.839991 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2" (OuterVolumeSpecName: "kube-api-access-mbqs2") pod "affba198-b83b-4263-a35d-ef5a3cc41852" (UID: "affba198-b83b-4263-a35d-ef5a3cc41852"). InnerVolumeSpecName "kube-api-access-mbqs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.880869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affba198-b83b-4263-a35d-ef5a3cc41852" (UID: "affba198-b83b-4263-a35d-ef5a3cc41852"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.937905 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.937951 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affba198-b83b-4263-a35d-ef5a3cc41852-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:01 crc kubenswrapper[4725]: I1002 11:33:01.937969 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqs2\" (UniqueName: \"kubernetes.io/projected/affba198-b83b-4263-a35d-ef5a3cc41852-kube-api-access-mbqs2\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.216028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wnr" event={"ID":"affba198-b83b-4263-a35d-ef5a3cc41852","Type":"ContainerDied","Data":"0777d591079d398f573c2cfc9b3889caecfa4bd704b42407fc2838cc1511262c"} Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.216081 4725 scope.go:117] "RemoveContainer" containerID="786448a795b5a4baaba19bacf4ccbe836c12ac03603c10998dfe7d06e724e96f" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.216209 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wnr" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.224583 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22dkc" event={"ID":"6b2025e3-9e4b-4b71-8906-cc1e635f6e3a","Type":"ContainerDied","Data":"780d3a0c221b2fda88354d1a3efe71d5b853b236893f14456bef4b565a58a7b5"} Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.224674 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22dkc" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.246389 4725 scope.go:117] "RemoveContainer" containerID="ea3c468a063c3840c70b45573a451071f30a6716291b5481bf249264e7a5671a" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.246817 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wnr"] Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.252837 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2wnr"] Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.257826 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.260223 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-22dkc"] Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.284968 4725 scope.go:117] "RemoveContainer" containerID="11b904cc8f7382880396a2ab6065e3badb29b50dd84bf9f6d15396a258dd3a47" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.301276 4725 scope.go:117] "RemoveContainer" containerID="05de864d1314b6293bbff83517fbccd94fd619ad5cedce6f4b907f1192fc9e5d" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.315163 4725 scope.go:117] "RemoveContainer" containerID="90ad44b59d5a810202ef1934408d35e126c11fbe8b59ee4a2e3ad5a6ed389420" Oct 02 11:33:02 crc kubenswrapper[4725]: I1002 11:33:02.327613 4725 scope.go:117] "RemoveContainer" containerID="d2804e7fb022eb9658fc093fc55d67cb05041982a4f957d54f4e67cadf0aa9a7" Oct 02 11:33:03 crc kubenswrapper[4725]: I1002 11:33:03.274160 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" path="/var/lib/kubelet/pods/6b2025e3-9e4b-4b71-8906-cc1e635f6e3a/volumes" Oct 02 11:33:03 crc kubenswrapper[4725]: I1002 11:33:03.274820 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" path="/var/lib/kubelet/pods/affba198-b83b-4263-a35d-ef5a3cc41852/volumes" Oct 02 11:33:15 crc kubenswrapper[4725]: I1002 11:33:15.980077 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.002436 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" containerID="cri-o://7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9" gracePeriod=15 Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.348195 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381412 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-847475b798-rp69m"] Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381678 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381692 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381705 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381713 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381740 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381749 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381760 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381767 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381778 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381786 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381797 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381805 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381814 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381823 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381833 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381843 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381855 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381862 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381873 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a448dc2a-523e-40ce-a363-fd6c1d64b700" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381881 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a448dc2a-523e-40ce-a363-fd6c1d64b700" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381890 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381898 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="extract-utilities" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381910 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38172d5-48b9-4461-aa8b-7f4384bbed88" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381917 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38172d5-48b9-4461-aa8b-7f4384bbed88" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381928 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381934 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381943 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381950 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="extract-content" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.381962 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.381970 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382083 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="affba198-b83b-4263-a35d-ef5a3cc41852" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382095 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8d0ae5-0192-4a4f-92ce-f93cf4dc306b" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382112 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38172d5-48b9-4461-aa8b-7f4384bbed88" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382122 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerName="oauth-openshift" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382132 4725 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a448dc2a-523e-40ce-a363-fd6c1d64b700" containerName="pruner" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382141 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="883bf189-bd31-45eb-ac44-35fe1b065be7" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382150 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2025e3-9e4b-4b71-8906-cc1e635f6e3a" containerName="registry-server" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.382635 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.392342 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-847475b798-rp69m"] Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.463668 4725 generic.go:334] "Generic (PLEG): container finished" podID="b79b9e09-2453-4a58-af84-0732c5f7892d" containerID="7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9" exitCode=0 Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.463736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" event={"ID":"b79b9e09-2453-4a58-af84-0732c5f7892d","Type":"ContainerDied","Data":"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9"} Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.463776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" event={"ID":"b79b9e09-2453-4a58-af84-0732c5f7892d","Type":"ContainerDied","Data":"94eebfc125067f4f04bcd1f162adba0683f033de7c9831bd917728351d94f8d0"} Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.463786 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x5npd" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.463794 4725 scope.go:117] "RemoveContainer" containerID="7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.488581 4725 scope.go:117] "RemoveContainer" containerID="7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9" Oct 02 11:33:41 crc kubenswrapper[4725]: E1002 11:33:41.489294 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9\": container with ID starting with 7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9 not found: ID does not exist" containerID="7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.489391 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9"} err="failed to get container status \"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9\": rpc error: code = NotFound desc = could not find container \"7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9\": container with ID starting with 7e3c29c57dd441eb11086c735a02022f6f9ab4e08e9eb133a17d30b967ddb0c9 not found: ID does not exist" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523479 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523522 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523602 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523659 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523684 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523708 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523795 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p8r2\" (UniqueName: \"kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523846 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523869 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.523894 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies\") pod \"b79b9e09-2453-4a58-af84-0732c5f7892d\" (UID: \"b79b9e09-2453-4a58-af84-0732c5f7892d\") " Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524019 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-session\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524057 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-router-certs\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524080 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-login\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524108 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs2t7\" (UniqueName: \"kubernetes.io/projected/17cb984a-635b-4936-8b34-4895abefbb95-kube-api-access-zs2t7\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-audit-policies\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524177 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524197 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524220 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524246 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-service-ca\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524266 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17cb984a-635b-4936-8b34-4895abefbb95-audit-dir\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524299 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524324 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524343 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-error\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.524405 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.525582 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.525606 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.526261 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.526766 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.530009 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.534184 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.534853 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2" (OuterVolumeSpecName: "kube-api-access-6p8r2") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "kube-api-access-6p8r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.544094 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.545446 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.545793 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.545832 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.546177 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.546321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b79b9e09-2453-4a58-af84-0732c5f7892d" (UID: "b79b9e09-2453-4a58-af84-0732c5f7892d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625160 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625249 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-error\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-session\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625338 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-router-certs\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625360 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-login\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625389 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs2t7\" (UniqueName: \"kubernetes.io/projected/17cb984a-635b-4936-8b34-4895abefbb95-kube-api-access-zs2t7\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625414 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-audit-policies\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " 
pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625467 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625490 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625514 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625542 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-service-ca\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17cb984a-635b-4936-8b34-4895abefbb95-audit-dir\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625630 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625644 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625657 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625670 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625691 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625704 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625742 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625757 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p8r2\" (UniqueName: \"kubernetes.io/projected/b79b9e09-2453-4a58-af84-0732c5f7892d-kube-api-access-6p8r2\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625770 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625782 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625792 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625803 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625814 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625826 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b79b9e09-2453-4a58-af84-0732c5f7892d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.625877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/17cb984a-635b-4936-8b34-4895abefbb95-audit-dir\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.627132 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-audit-policies\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.627235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.627417 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.627488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-service-ca\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.629364 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.629391 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.630233 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-session\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.630320 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-idp-0-file-data\") 
pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.630472 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-error\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.631910 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.632227 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-system-router-certs\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.633448 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17cb984a-635b-4936-8b34-4895abefbb95-v4-0-config-user-template-login\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.642831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs2t7\" (UniqueName: \"kubernetes.io/projected/17cb984a-635b-4936-8b34-4895abefbb95-kube-api-access-zs2t7\") pod \"oauth-openshift-847475b798-rp69m\" (UID: \"17cb984a-635b-4936-8b34-4895abefbb95\") " pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.711047 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.798579 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:33:41 crc kubenswrapper[4725]: I1002 11:33:41.801179 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x5npd"] Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.127395 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-847475b798-rp69m"] Oct 02 11:33:42 crc kubenswrapper[4725]: W1002 11:33:42.141876 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17cb984a_635b_4936_8b34_4895abefbb95.slice/crio-87d4cef5a96747023b202e3e05d1a33855bdc3c4339b8f2610565fd4a6831824 WatchSource:0}: Error finding container 87d4cef5a96747023b202e3e05d1a33855bdc3c4339b8f2610565fd4a6831824: Status 404 returned error can't find the container with id 87d4cef5a96747023b202e3e05d1a33855bdc3c4339b8f2610565fd4a6831824 Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.469805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" event={"ID":"17cb984a-635b-4936-8b34-4895abefbb95","Type":"ContainerStarted","Data":"24218e39c0cb3d38aca0149aac8489a8047526ae1366f01b49afafbc86758241"} Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.469853 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" event={"ID":"17cb984a-635b-4936-8b34-4895abefbb95","Type":"ContainerStarted","Data":"87d4cef5a96747023b202e3e05d1a33855bdc3c4339b8f2610565fd4a6831824"} Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.470119 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.494526 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" podStartSLOduration=26.494505385 podStartE2EDuration="26.494505385s" podCreationTimestamp="2025-10-02 11:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:33:42.490918506 +0000 UTC m=+342.398417989" watchObservedRunningTime="2025-10-02 11:33:42.494505385 +0000 UTC m=+342.402004848" Oct 02 11:33:42 crc kubenswrapper[4725]: I1002 11:33:42.804903 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-847475b798-rp69m" Oct 02 11:33:43 crc kubenswrapper[4725]: I1002 11:33:43.275032 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79b9e09-2453-4a58-af84-0732c5f7892d" path="/var/lib/kubelet/pods/b79b9e09-2453-4a58-af84-0732c5f7892d/volumes" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.393079 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.393834 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kf8zm" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="registry-server" 
containerID="cri-o://25e0a078e82c47fbc14ade08201ae7ceb99eecb756c784015dac8b8cdbd881eb" gracePeriod=30 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.404484 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.404694 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxc7r" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="registry-server" containerID="cri-o://0d813273c0504b97f2425fa654969ff189741a39fa71571b739ef256659a3db7" gracePeriod=30 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.409821 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.410340 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" containerID="cri-o://d20eabf9be67f15819f802ac50bfec5e39aae79c66df44f857a48965421259b0" gracePeriod=30 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.416950 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.417169 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btvqf" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="registry-server" containerID="cri-o://fb97959edacd9888ff39cf27c7e790c2a6c209b5b80f613e498555b3b72b0a8e" gracePeriod=30 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.430341 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.430392 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zrbg"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.430966 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.431366 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcv4l" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="registry-server" containerID="cri-o://1284c551828d47f960172a7c03429ad6da1ceb693d92fffe9e6dcd3769a8df3e" gracePeriod=30 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.442803 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zrbg"] Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.599411 4725 generic.go:334] "Generic (PLEG): container finished" podID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerID="25e0a078e82c47fbc14ade08201ae7ceb99eecb756c784015dac8b8cdbd881eb" exitCode=0 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.599523 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerDied","Data":"25e0a078e82c47fbc14ade08201ae7ceb99eecb756c784015dac8b8cdbd881eb"} Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.603663 4725 generic.go:334] "Generic (PLEG): container finished" podID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerID="0d813273c0504b97f2425fa654969ff189741a39fa71571b739ef256659a3db7" exitCode=0 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.603777 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerDied","Data":"0d813273c0504b97f2425fa654969ff189741a39fa71571b739ef256659a3db7"} Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.604685 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.604813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjht\" (UniqueName: \"kubernetes.io/projected/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-kube-api-access-lgjht\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.604851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.607258 4725 generic.go:334] "Generic (PLEG): container finished" podID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerID="1284c551828d47f960172a7c03429ad6da1ceb693d92fffe9e6dcd3769a8df3e" exitCode=0 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.607313 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerDied","Data":"1284c551828d47f960172a7c03429ad6da1ceb693d92fffe9e6dcd3769a8df3e"} Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.608825 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a84f459-c277-472f-a84c-328e8523f8e0" containerID="d20eabf9be67f15819f802ac50bfec5e39aae79c66df44f857a48965421259b0" exitCode=0 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.608895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" event={"ID":"9a84f459-c277-472f-a84c-328e8523f8e0","Type":"ContainerDied","Data":"d20eabf9be67f15819f802ac50bfec5e39aae79c66df44f857a48965421259b0"} Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.612571 4725 generic.go:334] "Generic (PLEG): container finished" podID="f796b79e-6656-4260-8a9c-7ed986582af9" containerID="fb97959edacd9888ff39cf27c7e790c2a6c209b5b80f613e498555b3b72b0a8e" exitCode=0 Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.612637 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerDied","Data":"fb97959edacd9888ff39cf27c7e790c2a6c209b5b80f613e498555b3b72b0a8e"} Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.705587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjht\" (UniqueName: \"kubernetes.io/projected/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-kube-api-access-lgjht\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.705930 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.705965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.707263 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.714258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6b1730b0-4eb3-4a40-86a6-2908a9c9acb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zrbg\" (UID: \"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:03 crc 
Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.762177 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg"
Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.844813 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kf8zm"
Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.854157 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8"
Oct 02 11:34:03 crc kubenswrapper[4725]: I1002 11:34:03.857031 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvqf"
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008404 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhzmj\" (UniqueName: \"kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj\") pod \"9a84f459-c277-472f-a84c-328e8523f8e0\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008446 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities\") pod \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008482 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") pod \"9a84f459-c277-472f-a84c-328e8523f8e0\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities\") pod \"f796b79e-6656-4260-8a9c-7ed986582af9\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content\") pod \"f796b79e-6656-4260-8a9c-7ed986582af9\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfn79\" (UniqueName: \"kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79\") pod \"f796b79e-6656-4260-8a9c-7ed986582af9\" (UID: \"f796b79e-6656-4260-8a9c-7ed986582af9\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008632 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca\") pod \"9a84f459-c277-472f-a84c-328e8523f8e0\" (UID: \"9a84f459-c277-472f-a84c-328e8523f8e0\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008666 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content\") pod \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.008715 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jzw5\" (UniqueName: \"kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5\") pod \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\" (UID: \"1daaf185-adc6-4e07-a8ea-a22ffd4c505a\") "
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.009301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities" (OuterVolumeSpecName: "utilities") pod "f796b79e-6656-4260-8a9c-7ed986582af9" (UID: "f796b79e-6656-4260-8a9c-7ed986582af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.009616 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9a84f459-c277-472f-a84c-328e8523f8e0" (UID: "9a84f459-c277-472f-a84c-328e8523f8e0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.010410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities" (OuterVolumeSpecName: "utilities") pod "1daaf185-adc6-4e07-a8ea-a22ffd4c505a" (UID: "1daaf185-adc6-4e07-a8ea-a22ffd4c505a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.013917 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79" (OuterVolumeSpecName: "kube-api-access-vfn79") pod "f796b79e-6656-4260-8a9c-7ed986582af9" (UID: "f796b79e-6656-4260-8a9c-7ed986582af9"). InnerVolumeSpecName "kube-api-access-vfn79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.014019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5" (OuterVolumeSpecName: "kube-api-access-9jzw5") pod "1daaf185-adc6-4e07-a8ea-a22ffd4c505a" (UID: "1daaf185-adc6-4e07-a8ea-a22ffd4c505a"). InnerVolumeSpecName "kube-api-access-9jzw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.014112 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9a84f459-c277-472f-a84c-328e8523f8e0" (UID: "9a84f459-c277-472f-a84c-328e8523f8e0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.014335 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj" (OuterVolumeSpecName: "kube-api-access-mhzmj") pod "9a84f459-c277-472f-a84c-328e8523f8e0" (UID: "9a84f459-c277-472f-a84c-328e8523f8e0"). InnerVolumeSpecName "kube-api-access-mhzmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.026668 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f796b79e-6656-4260-8a9c-7ed986582af9" (UID: "f796b79e-6656-4260-8a9c-7ed986582af9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.073006 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1daaf185-adc6-4e07-a8ea-a22ffd4c505a" (UID: "1daaf185-adc6-4e07-a8ea-a22ffd4c505a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.109928 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.109972 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfn79\" (UniqueName: \"kubernetes.io/projected/f796b79e-6656-4260-8a9c-7ed986582af9-kube-api-access-vfn79\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.109987 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.109999 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.110009 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jzw5\" (UniqueName: \"kubernetes.io/projected/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-kube-api-access-9jzw5\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.110019 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhzmj\" (UniqueName: \"kubernetes.io/projected/9a84f459-c277-472f-a84c-328e8523f8e0-kube-api-access-mhzmj\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.110030 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daaf185-adc6-4e07-a8ea-a22ffd4c505a-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.110041 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/secret/9a84f459-c277-472f-a84c-328e8523f8e0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.110051 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796b79e-6656-4260-8a9c-7ed986582af9-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.242577 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zrbg"] Oct 02 11:34:04 crc kubenswrapper[4725]: W1002 11:34:04.247967 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1730b0_4eb3_4a40_86a6_2908a9c9acb2.slice/crio-ee243cb519b1ac8615ae98f0fe9e1ae92d0901683479ba8afa599a8f5a29b533 WatchSource:0}: Error finding container ee243cb519b1ac8615ae98f0fe9e1ae92d0901683479ba8afa599a8f5a29b533: Status 404 returned error can't find the container with id ee243cb519b1ac8615ae98f0fe9e1ae92d0901683479ba8afa599a8f5a29b533 Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.424558 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.457651 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.615126 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvbws\" (UniqueName: \"kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws\") pod \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616192 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities\") pod \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities" (OuterVolumeSpecName: "utilities") pod "9c33e8b3-403f-47e9-8323-f31c5c5195d7" (UID: "9c33e8b3-403f-47e9-8323-f31c5c5195d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616356 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities\") pod \"0c9faeda-7818-4db8-95db-3b95a8458ee7\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616426 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content\") pod \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\" (UID: \"9c33e8b3-403f-47e9-8323-f31c5c5195d7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616491 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content\") pod \"0c9faeda-7818-4db8-95db-3b95a8458ee7\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616536 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mm4m\" (UniqueName: \"kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m\") pod \"0c9faeda-7818-4db8-95db-3b95a8458ee7\" (UID: \"0c9faeda-7818-4db8-95db-3b95a8458ee7\") " Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.616786 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.617081 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities" (OuterVolumeSpecName: "utilities") pod "0c9faeda-7818-4db8-95db-3b95a8458ee7" (UID: "0c9faeda-7818-4db8-95db-3b95a8458ee7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.618681 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" event={"ID":"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2","Type":"ContainerStarted","Data":"7a3199cf495e9c5f2b3181325853f82528664072d939527ab189bef4040a9ac6"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.619973 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m" (OuterVolumeSpecName: "kube-api-access-8mm4m") pod "0c9faeda-7818-4db8-95db-3b95a8458ee7" (UID: "0c9faeda-7818-4db8-95db-3b95a8458ee7"). InnerVolumeSpecName "kube-api-access-8mm4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.620445 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5zrbg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.621332 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" podUID="6b1730b0-4eb3-4a40-86a6-2908a9c9acb2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.622364 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxc7r" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.624398 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcv4l" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.625707 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.625839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" event={"ID":"6b1730b0-4eb3-4a40-86a6-2908a9c9acb2","Type":"ContainerStarted","Data":"ee243cb519b1ac8615ae98f0fe9e1ae92d0901683479ba8afa599a8f5a29b533"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.625964 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.625981 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxc7r" event={"ID":"9c33e8b3-403f-47e9-8323-f31c5c5195d7","Type":"ContainerDied","Data":"51a96ca2f818ce4ac8bfbc9d6d3a98c7781634293644d8e5394c7a5b6de04eef"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.626003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcv4l" event={"ID":"0c9faeda-7818-4db8-95db-3b95a8458ee7","Type":"ContainerDied","Data":"cec0a23c618a6ed3e43166173810a38487efac9cc42def082c553c253e52b36b"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.626026 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5sv8" event={"ID":"9a84f459-c277-472f-a84c-328e8523f8e0","Type":"ContainerDied","Data":"783b9681f21f76e7245dec8f5f4340292f98ee23eeda38d3ae42d83a46d047f7"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.626057 4725 scope.go:117] "RemoveContainer" containerID="0d813273c0504b97f2425fa654969ff189741a39fa71571b739ef256659a3db7" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.634537 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws" (OuterVolumeSpecName: "kube-api-access-bvbws") pod "9c33e8b3-403f-47e9-8323-f31c5c5195d7" (UID: "9c33e8b3-403f-47e9-8323-f31c5c5195d7"). InnerVolumeSpecName "kube-api-access-bvbws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.634769 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvqf" event={"ID":"f796b79e-6656-4260-8a9c-7ed986582af9","Type":"ContainerDied","Data":"007807e5b7a281285311a688aaf26b4e573c08d89a35a534cf82242f13c1928f"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.634801 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvqf" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.643503 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" podStartSLOduration=1.643486939 podStartE2EDuration="1.643486939s" podCreationTimestamp="2025-10-02 11:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:34:04.639786027 +0000 UTC m=+364.547285490" watchObservedRunningTime="2025-10-02 11:34:04.643486939 +0000 UTC m=+364.550986392" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.645804 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kf8zm" event={"ID":"1daaf185-adc6-4e07-a8ea-a22ffd4c505a","Type":"ContainerDied","Data":"3a5f734c75adb7b25734153b763bd0032a2fc9c8f61a3a8c865f8d26ab4bd594"} Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.647233 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kf8zm" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.669197 4725 scope.go:117] "RemoveContainer" containerID="6e81a920a6abe1de1b9be74dd8c0f92b5e3d8aef52fe658c07e0241d7f8458e5" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.689736 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.696351 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5sv8"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.703800 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c33e8b3-403f-47e9-8323-f31c5c5195d7" (UID: "9c33e8b3-403f-47e9-8323-f31c5c5195d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.711110 4725 scope.go:117] "RemoveContainer" containerID="06d16bd881ba34546c0691f0b8a5ababe28501043ce275eea4fe219a82f4fa47" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.718507 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c33e8b3-403f-47e9-8323-f31c5c5195d7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.718548 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mm4m\" (UniqueName: \"kubernetes.io/projected/0c9faeda-7818-4db8-95db-3b95a8458ee7-kube-api-access-8mm4m\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.718566 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvbws\" (UniqueName: \"kubernetes.io/projected/9c33e8b3-403f-47e9-8323-f31c5c5195d7-kube-api-access-bvbws\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.718580 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.719094 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.730790 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c9faeda-7818-4db8-95db-3b95a8458ee7" (UID: "0c9faeda-7818-4db8-95db-3b95a8458ee7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.734688 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvqf"] Oct 02 11:34:04 crc kubenswrapper[4725]: E1002 11:34:04.736985 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf796b79e_6656_4260_8a9c_7ed986582af9.slice/crio-007807e5b7a281285311a688aaf26b4e573c08d89a35a534cf82242f13c1928f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a84f459_c277_472f_a84c_328e8523f8e0.slice\": RecentStats: unable to find data in memory cache]" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.739357 4725 scope.go:117] "RemoveContainer" containerID="1284c551828d47f960172a7c03429ad6da1ceb693d92fffe9e6dcd3769a8df3e" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.741364 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.744004 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kf8zm"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.751994 4725 scope.go:117] "RemoveContainer" containerID="534dbbfab4610491e030b5aebfdc439eb6d2a789ed84b3df02c47a8d95140452" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.769845 4725 scope.go:117] "RemoveContainer" containerID="a4fa08e8ce7dbeffff0c79bb7f5b7235cfdf3555212ead759ae6daf49286bff5" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.785763 4725 scope.go:117] "RemoveContainer" containerID="d20eabf9be67f15819f802ac50bfec5e39aae79c66df44f857a48965421259b0" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.798124 4725 scope.go:117] "RemoveContainer" containerID="fb97959edacd9888ff39cf27c7e790c2a6c209b5b80f613e498555b3b72b0a8e" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.810109 4725 scope.go:117] "RemoveContainer" containerID="9d49b36a5316696d99cb95c9214908a1c6490d208223ec4fde0a529b0624cf4c" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.822306 4725 scope.go:117] "RemoveContainer" containerID="7e234b8b6f75e0df996b58bc67fe935108c8063e2ae57f4515602b9531d5f8d2" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.823840 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9faeda-7818-4db8-95db-3b95a8458ee7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.839602 4725 scope.go:117] "RemoveContainer" containerID="25e0a078e82c47fbc14ade08201ae7ceb99eecb756c784015dac8b8cdbd881eb" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.853673 4725 scope.go:117] "RemoveContainer" containerID="f099f49d63e1a6770107e0a624726da0ed138cd9175a91f571ee6f4060de5796" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.870293 4725 scope.go:117] "RemoveContainer" containerID="bb962986eb150dc5b190b1c03f87f1e7d5ee2a02e825f48f773556892e129e3d" Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.955474 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.957913 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dcv4l"] Oct 02 
11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.962189 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:34:04 crc kubenswrapper[4725]: I1002 11:34:04.964696 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxc7r"] Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163049 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163259 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163273 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163287 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163295 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163304 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163312 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163324 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163331 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163342 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163349 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163361 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163368 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163378 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163386 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163397 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 
11:34:05.163404 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163413 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163420 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163428 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163436 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163446 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163453 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="extract-content" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163464 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163472 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: E1002 11:34:05.163483 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163490 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="extract-utilities" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163588 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163604 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163614 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163627 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" containerName="marketplace-operator" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.163637 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" containerName="registry-server" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.164606 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.167502 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.171696 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.230142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.230205 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmp9\" (UniqueName: \"kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.230253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.275070 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9faeda-7818-4db8-95db-3b95a8458ee7" path="/var/lib/kubelet/pods/0c9faeda-7818-4db8-95db-3b95a8458ee7/volumes" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.276640 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1daaf185-adc6-4e07-a8ea-a22ffd4c505a" path="/var/lib/kubelet/pods/1daaf185-adc6-4e07-a8ea-a22ffd4c505a/volumes" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.278109 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a84f459-c277-472f-a84c-328e8523f8e0" path="/var/lib/kubelet/pods/9a84f459-c277-472f-a84c-328e8523f8e0/volumes" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.280228 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c33e8b3-403f-47e9-8323-f31c5c5195d7" path="/var/lib/kubelet/pods/9c33e8b3-403f-47e9-8323-f31c5c5195d7/volumes" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.281617 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f796b79e-6656-4260-8a9c-7ed986582af9" path="/var/lib/kubelet/pods/f796b79e-6656-4260-8a9c-7ed986582af9/volumes" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.331634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmp9\" (UniqueName: \"kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.331713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.331813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.332303 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.332409 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.351405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmp9\" (UniqueName: \"kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9\") pod \"certified-operators-6bwvh\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.480974 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.656059 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 11:34:05 crc kubenswrapper[4725]: W1002 11:34:05.663112 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c0bff4f_9811_437e_9938_f5439d5a38b4.slice/crio-95ba0e66bb7d25b48523caa338c3127983fe45d982d4e74029e84a5c6f1d1f31 WatchSource:0}: Error finding container 95ba0e66bb7d25b48523caa338c3127983fe45d982d4e74029e84a5c6f1d1f31: Status 404 returned error can't find the container with id 95ba0e66bb7d25b48523caa338c3127983fe45d982d4e74029e84a5c6f1d1f31 Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.667949 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5zrbg" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.771282 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn4rg"] Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.772412 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.774042 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.780795 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn4rg"] Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.938344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-catalog-content\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.938673 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-utilities\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:05 crc kubenswrapper[4725]: I1002 11:34:05.938705 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjcz\" (UniqueName: \"kubernetes.io/projected/abe00bd7-b29e-4266-8a7d-64a79f500125-kube-api-access-nmjcz\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.039528 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-catalog-content\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.039573 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-utilities\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.039644 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjcz\" (UniqueName: \"kubernetes.io/projected/abe00bd7-b29e-4266-8a7d-64a79f500125-kube-api-access-nmjcz\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.040422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-catalog-content\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.040617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe00bd7-b29e-4266-8a7d-64a79f500125-utilities\") pod \"community-operators-pn4rg\" (UID: 
\"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.059038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmjcz\" (UniqueName: \"kubernetes.io/projected/abe00bd7-b29e-4266-8a7d-64a79f500125-kube-api-access-nmjcz\") pod \"community-operators-pn4rg\" (UID: \"abe00bd7-b29e-4266-8a7d-64a79f500125\") " pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.089490 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.465363 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn4rg"] Oct 02 11:34:06 crc kubenswrapper[4725]: W1002 11:34:06.471024 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabe00bd7_b29e_4266_8a7d_64a79f500125.slice/crio-9f1fdba12ac18c48a1721e40bc1c35641ea92811ceb28acab029dea792e38e58 WatchSource:0}: Error finding container 9f1fdba12ac18c48a1721e40bc1c35641ea92811ceb28acab029dea792e38e58: Status 404 returned error can't find the container with id 9f1fdba12ac18c48a1721e40bc1c35641ea92811ceb28acab029dea792e38e58 Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.667683 4725 generic.go:334] "Generic (PLEG): container finished" podID="abe00bd7-b29e-4266-8a7d-64a79f500125" containerID="da66082b794dbb8366cbf7ca311bff39c1b8254404e5783bf1be7904b5f22974" exitCode=0 Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.667789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4rg" event={"ID":"abe00bd7-b29e-4266-8a7d-64a79f500125","Type":"ContainerDied","Data":"da66082b794dbb8366cbf7ca311bff39c1b8254404e5783bf1be7904b5f22974"} Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.668029 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4rg" event={"ID":"abe00bd7-b29e-4266-8a7d-64a79f500125","Type":"ContainerStarted","Data":"9f1fdba12ac18c48a1721e40bc1c35641ea92811ceb28acab029dea792e38e58"} Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.671232 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerID="8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d" exitCode=0 Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.671266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerDied","Data":"8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d"} Oct 02 11:34:06 crc kubenswrapper[4725]: I1002 11:34:06.671294 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerStarted","Data":"95ba0e66bb7d25b48523caa338c3127983fe45d982d4e74029e84a5c6f1d1f31"} Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.566604 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lz4c7"] Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.567986 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.569933 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.576204 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz4c7"] Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.666658 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-catalog-content\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.666712 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-utilities\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.666758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwprv\" (UniqueName: \"kubernetes.io/projected/c456f806-c830-4eac-bb7b-5c5666bfbd77-kube-api-access-xwprv\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.683746 4725 generic.go:334] "Generic (PLEG): container finished" podID="abe00bd7-b29e-4266-8a7d-64a79f500125" containerID="15a82abdd3455f135e2fae355104a88d906aa68b9338880b3092f9726f5f50f8" exitCode=0 Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.683806 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4rg" event={"ID":"abe00bd7-b29e-4266-8a7d-64a79f500125","Type":"ContainerDied","Data":"15a82abdd3455f135e2fae355104a88d906aa68b9338880b3092f9726f5f50f8"} Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.687318 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerStarted","Data":"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9"} Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.767748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-catalog-content\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.767826 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-utilities\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.767872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwprv\" (UniqueName: 
\"kubernetes.io/projected/c456f806-c830-4eac-bb7b-5c5666bfbd77-kube-api-access-xwprv\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.768273 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-catalog-content\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.768531 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c456f806-c830-4eac-bb7b-5c5666bfbd77-utilities\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:07 crc kubenswrapper[4725]: I1002 11:34:07.788606 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwprv\" (UniqueName: \"kubernetes.io/projected/c456f806-c830-4eac-bb7b-5c5666bfbd77-kube-api-access-xwprv\") pod \"redhat-marketplace-lz4c7\" (UID: \"c456f806-c830-4eac-bb7b-5c5666bfbd77\") " pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.065693 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.164302 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tkn8"] Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.166146 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.167961 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.177773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tkn8"] Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.272606 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-catalog-content\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.272662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-utilities\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.272688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mxf\" (UniqueName: \"kubernetes.io/projected/81dcdac7-80b5-4b66-9620-1b4a9619b47b-kube-api-access-k5mxf\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.374270 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-catalog-content\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.374332 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-utilities\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.374369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mxf\" (UniqueName: \"kubernetes.io/projected/81dcdac7-80b5-4b66-9620-1b4a9619b47b-kube-api-access-k5mxf\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.375711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-catalog-content\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.375830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dcdac7-80b5-4b66-9620-1b4a9619b47b-utilities\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " 
pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.395271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mxf\" (UniqueName: \"kubernetes.io/projected/81dcdac7-80b5-4b66-9620-1b4a9619b47b-kube-api-access-k5mxf\") pod \"redhat-operators-2tkn8\" (UID: \"81dcdac7-80b5-4b66-9620-1b4a9619b47b\") " pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.449529 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz4c7"] Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.489118 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.686956 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tkn8"] Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.708567 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz4c7" event={"ID":"c456f806-c830-4eac-bb7b-5c5666bfbd77","Type":"ContainerStarted","Data":"85b9bb08db8c61d8b0ec97f06cf88ebab380720b825b98b68d3c4016fae6f08d"} Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.708630 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz4c7" event={"ID":"c456f806-c830-4eac-bb7b-5c5666bfbd77","Type":"ContainerStarted","Data":"74da55860c4234b500acc8521d9244103ac1796307eb1bf72a4b03742c0367f9"} Oct 02 11:34:08 crc kubenswrapper[4725]: W1002 11:34:08.710503 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81dcdac7_80b5_4b66_9620_1b4a9619b47b.slice/crio-1f36c83a5b7ef4c12ac081900cd92dce56ff5bf7f0474f3bc9d3ad0d782ed101 WatchSource:0}: Error finding container 1f36c83a5b7ef4c12ac081900cd92dce56ff5bf7f0474f3bc9d3ad0d782ed101: Status 404 returned error can't find the container with id 1f36c83a5b7ef4c12ac081900cd92dce56ff5bf7f0474f3bc9d3ad0d782ed101 Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.716759 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerID="29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9" exitCode=0 Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.716820 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerDied","Data":"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9"} Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.716854 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerStarted","Data":"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f"} Oct 02 11:34:08 crc kubenswrapper[4725]: I1002 11:34:08.751459 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bwvh" podStartSLOduration=1.956754616 podStartE2EDuration="3.751441245s" podCreationTimestamp="2025-10-02 11:34:05 +0000 UTC" firstStartedPulling="2025-10-02 11:34:06.672704743 +0000 UTC m=+366.580204206" lastFinishedPulling="2025-10-02 11:34:08.467391372 +0000 UTC m=+368.374890835" 
observedRunningTime="2025-10-02 11:34:08.747342702 +0000 UTC m=+368.654842175" watchObservedRunningTime="2025-10-02 11:34:08.751441245 +0000 UTC m=+368.658940698" Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.725851 4725 generic.go:334] "Generic (PLEG): container finished" podID="81dcdac7-80b5-4b66-9620-1b4a9619b47b" containerID="6c77887f4367621ca09d92b719d5d888e02ead0f7d7a7fbab2afecf8438f5f37" exitCode=0 Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.725900 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tkn8" event={"ID":"81dcdac7-80b5-4b66-9620-1b4a9619b47b","Type":"ContainerDied","Data":"6c77887f4367621ca09d92b719d5d888e02ead0f7d7a7fbab2afecf8438f5f37"} Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.726509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tkn8" event={"ID":"81dcdac7-80b5-4b66-9620-1b4a9619b47b","Type":"ContainerStarted","Data":"1f36c83a5b7ef4c12ac081900cd92dce56ff5bf7f0474f3bc9d3ad0d782ed101"} Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.729238 4725 generic.go:334] "Generic (PLEG): container finished" podID="c456f806-c830-4eac-bb7b-5c5666bfbd77" containerID="85b9bb08db8c61d8b0ec97f06cf88ebab380720b825b98b68d3c4016fae6f08d" exitCode=0 Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.729304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz4c7" event={"ID":"c456f806-c830-4eac-bb7b-5c5666bfbd77","Type":"ContainerDied","Data":"85b9bb08db8c61d8b0ec97f06cf88ebab380720b825b98b68d3c4016fae6f08d"} Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.731776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4rg" event={"ID":"abe00bd7-b29e-4266-8a7d-64a79f500125","Type":"ContainerStarted","Data":"5304c6cee129601d67bf906f9fb5ac854c759382087ad44a7d032d9c92e19b11"} Oct 02 11:34:09 crc kubenswrapper[4725]: I1002 11:34:09.783914 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn4rg" podStartSLOduration=2.578915841 podStartE2EDuration="4.78389696s" podCreationTimestamp="2025-10-02 11:34:05 +0000 UTC" firstStartedPulling="2025-10-02 11:34:06.669140336 +0000 UTC m=+366.576639799" lastFinishedPulling="2025-10-02 11:34:08.874121455 +0000 UTC m=+368.781620918" observedRunningTime="2025-10-02 11:34:09.783246803 +0000 UTC m=+369.690746276" watchObservedRunningTime="2025-10-02 11:34:09.78389696 +0000 UTC m=+369.691396423" Oct 02 11:34:11 crc kubenswrapper[4725]: I1002 11:34:11.744588 4725 generic.go:334] "Generic (PLEG): container finished" podID="c456f806-c830-4eac-bb7b-5c5666bfbd77" containerID="12166678ea945ba5019377e16de3138f6414db845023fcb4fefb13898d351e29" exitCode=0 Oct 02 11:34:11 crc kubenswrapper[4725]: I1002 11:34:11.744696 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz4c7" event={"ID":"c456f806-c830-4eac-bb7b-5c5666bfbd77","Type":"ContainerDied","Data":"12166678ea945ba5019377e16de3138f6414db845023fcb4fefb13898d351e29"} Oct 02 11:34:11 crc kubenswrapper[4725]: I1002 11:34:11.748177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tkn8" event={"ID":"81dcdac7-80b5-4b66-9620-1b4a9619b47b","Type":"ContainerStarted","Data":"8e004f9e6c3635a01fa51bbd8f31e677238d69c6778f448677378163bdb1d688"} Oct 02 11:34:12 crc kubenswrapper[4725]: I1002 11:34:12.755317 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz4c7" event={"ID":"c456f806-c830-4eac-bb7b-5c5666bfbd77","Type":"ContainerStarted","Data":"d45204fa6dbdc2e83c1108b3b8037f8e1976e11b4568cb3d91daeb5a3fd684d6"} Oct 02 11:34:12 crc kubenswrapper[4725]: I1002 11:34:12.757696 4725 generic.go:334] "Generic (PLEG): container finished" podID="81dcdac7-80b5-4b66-9620-1b4a9619b47b" containerID="8e004f9e6c3635a01fa51bbd8f31e677238d69c6778f448677378163bdb1d688" exitCode=0 Oct 02 11:34:12 crc kubenswrapper[4725]: I1002 11:34:12.757748 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tkn8" event={"ID":"81dcdac7-80b5-4b66-9620-1b4a9619b47b","Type":"ContainerDied","Data":"8e004f9e6c3635a01fa51bbd8f31e677238d69c6778f448677378163bdb1d688"} Oct 02 11:34:12 crc kubenswrapper[4725]: I1002 11:34:12.774564 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lz4c7" podStartSLOduration=3.120157322 podStartE2EDuration="5.774546705s" podCreationTimestamp="2025-10-02 11:34:07 +0000 UTC" firstStartedPulling="2025-10-02 11:34:09.730453906 +0000 UTC m=+369.637953369" lastFinishedPulling="2025-10-02 11:34:12.384843289 +0000 UTC m=+372.292342752" observedRunningTime="2025-10-02 11:34:12.772429058 +0000 UTC m=+372.679928541" watchObservedRunningTime="2025-10-02 11:34:12.774546705 +0000 UTC m=+372.682046168" Oct 02 11:34:13 crc kubenswrapper[4725]: I1002 11:34:13.765681 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tkn8" event={"ID":"81dcdac7-80b5-4b66-9620-1b4a9619b47b","Type":"ContainerStarted","Data":"390c48fccbb044a074ed78a1dfa60eddd5267397b36bd47bc0cfb3c5ba5baee6"} Oct 02 11:34:13 crc kubenswrapper[4725]: I1002 11:34:13.790240 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tkn8" podStartSLOduration=2.176829766 podStartE2EDuration="5.790222732s" podCreationTimestamp="2025-10-02 11:34:08 +0000 UTC" firstStartedPulling="2025-10-02 11:34:09.727553237 +0000 UTC m=+369.635052700" lastFinishedPulling="2025-10-02 11:34:13.340946203 +0000 UTC m=+373.248445666" observedRunningTime="2025-10-02 11:34:13.786919041 +0000 UTC m=+373.694418514" watchObservedRunningTime="2025-10-02 11:34:13.790222732 +0000 UTC m=+373.697722195" Oct 02 11:34:14 crc kubenswrapper[4725]: I1002 11:34:14.978303 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:34:14 crc kubenswrapper[4725]: I1002 11:34:14.978374 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:34:15 crc kubenswrapper[4725]: I1002 11:34:15.481808 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:15 crc kubenswrapper[4725]: I1002 11:34:15.482643 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:15 crc 
kubenswrapper[4725]: I1002 11:34:15.531716 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:15 crc kubenswrapper[4725]: I1002 11:34:15.812149 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 11:34:16 crc kubenswrapper[4725]: I1002 11:34:16.090448 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:16 crc kubenswrapper[4725]: I1002 11:34:16.090528 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:16 crc kubenswrapper[4725]: I1002 11:34:16.134457 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:16 crc kubenswrapper[4725]: I1002 11:34:16.819426 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn4rg" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.066228 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.066292 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.138213 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.490510 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.490572 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.531571 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.830412 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lz4c7" Oct 02 11:34:18 crc kubenswrapper[4725]: I1002 11:34:18.831588 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tkn8" Oct 02 11:34:44 crc kubenswrapper[4725]: I1002 11:34:44.978275 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:34:44 crc kubenswrapper[4725]: I1002 11:34:44.978561 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:35:14 crc kubenswrapper[4725]: I1002 11:35:14.978408 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:35:14 crc kubenswrapper[4725]: I1002 11:35:14.978890 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:35:14 crc kubenswrapper[4725]: I1002 11:35:14.978930 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:35:14 crc kubenswrapper[4725]: I1002 11:35:14.979355 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:35:14 crc kubenswrapper[4725]: I1002 11:35:14.979454 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7" gracePeriod=600 Oct 02 11:35:16 crc kubenswrapper[4725]: I1002 11:35:16.080557 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7" exitCode=0 Oct 02 11:35:16 crc kubenswrapper[4725]: I1002 11:35:16.080963 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7"} Oct 02 11:35:16 crc kubenswrapper[4725]: I1002 11:35:16.081002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3"} Oct 02 11:35:16 crc kubenswrapper[4725]: I1002 11:35:16.081025 4725 scope.go:117] "RemoveContainer" containerID="c2179162c73137db2a6be71dcfa748bcf5d3d0fa3f5646dbecea12ac843b2fa2" Oct 02 11:35:19 crc kubenswrapper[4725]: I1002 11:35:19.974677 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ftpjz"] Oct 02 11:35:19 crc kubenswrapper[4725]: I1002 11:35:19.976303 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:19 crc kubenswrapper[4725]: I1002 11:35:19.996216 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ftpjz"] Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.135960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136042 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-registry-tls\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136073 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90911f29-d7c7-440d-a587-a12fe78cc479-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136088 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-trusted-ca\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136121 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpz6s\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-kube-api-access-kpz6s\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136138 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-registry-certificates\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136395 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90911f29-d7c7-440d-a587-a12fe78cc479-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.136606 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-bound-sa-token\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.158184 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.238485 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpz6s\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-kube-api-access-kpz6s\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.238579 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-registry-certificates\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.238661 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90911f29-d7c7-440d-a587-a12fe78cc479-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.238772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-bound-sa-token\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.239634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-registry-tls\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.239781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90911f29-d7c7-440d-a587-a12fe78cc479-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.239829 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-trusted-ca\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.240152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90911f29-d7c7-440d-a587-a12fe78cc479-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.240466 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-registry-certificates\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.241508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90911f29-d7c7-440d-a587-a12fe78cc479-trusted-ca\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.247600 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-registry-tls\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.247784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90911f29-d7c7-440d-a587-a12fe78cc479-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.259452 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpz6s\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-kube-api-access-kpz6s\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.262965 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90911f29-d7c7-440d-a587-a12fe78cc479-bound-sa-token\") pod \"image-registry-66df7c8f76-ftpjz\" (UID: \"90911f29-d7c7-440d-a587-a12fe78cc479\") " pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.308671 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:20 crc kubenswrapper[4725]: I1002 11:35:20.517436 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ftpjz"] Oct 02 11:35:21 crc kubenswrapper[4725]: I1002 11:35:21.117421 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" event={"ID":"90911f29-d7c7-440d-a587-a12fe78cc479","Type":"ContainerStarted","Data":"8683c8b049f3d71e1fa2b3bcce46868a2761436ae9041aae3093ba1e523d921d"} Oct 02 11:35:21 crc kubenswrapper[4725]: I1002 11:35:21.117846 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" event={"ID":"90911f29-d7c7-440d-a587-a12fe78cc479","Type":"ContainerStarted","Data":"9a8e968d602745410724c3493380067dffed7aba37b636556234ed2656288061"} Oct 02 11:35:21 crc kubenswrapper[4725]: I1002 11:35:21.117876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:21 crc kubenswrapper[4725]: I1002 11:35:21.159472 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" podStartSLOduration=2.159440994 podStartE2EDuration="2.159440994s" podCreationTimestamp="2025-10-02 11:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:35:21.153147325 +0000 UTC m=+441.060646818" watchObservedRunningTime="2025-10-02 11:35:21.159440994 +0000 UTC m=+441.066940497" Oct 02 11:35:40 crc kubenswrapper[4725]: I1002 11:35:40.323645 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ftpjz" Oct 02 11:35:40 crc kubenswrapper[4725]: I1002 11:35:40.393846 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.441258 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" podUID="ad9fc733-da08-4b08-a234-f85424ed53cb" containerName="registry" containerID="cri-o://84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57" gracePeriod=30 Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.809116 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875276 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875324 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875566 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875592 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875652 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrkm\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.875674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls\") pod \"ad9fc733-da08-4b08-a234-f85424ed53cb\" (UID: \"ad9fc733-da08-4b08-a234-f85424ed53cb\") " Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.876277 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.876379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.881325 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.881432 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.881935 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm" (OuterVolumeSpecName: "kube-api-access-tvrkm") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "kube-api-access-tvrkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.883067 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.887936 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.891752 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ad9fc733-da08-4b08-a234-f85424ed53cb" (UID: "ad9fc733-da08-4b08-a234-f85424ed53cb"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977333 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad9fc733-da08-4b08-a234-f85424ed53cb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977394 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977410 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977424 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrkm\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-kube-api-access-tvrkm\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977438 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977449 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad9fc733-da08-4b08-a234-f85424ed53cb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:05 crc kubenswrapper[4725]: I1002 11:36:05.977487 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad9fc733-da08-4b08-a234-f85424ed53cb-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.389756 4725 generic.go:334] "Generic (PLEG): container finished" podID="ad9fc733-da08-4b08-a234-f85424ed53cb" containerID="84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57" exitCode=0 Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.389834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" event={"ID":"ad9fc733-da08-4b08-a234-f85424ed53cb","Type":"ContainerDied","Data":"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57"} Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.389973 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.390581 4725 scope.go:117] "RemoveContainer" containerID="84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57" Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.390493 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99xdm" event={"ID":"ad9fc733-da08-4b08-a234-f85424ed53cb","Type":"ContainerDied","Data":"d1d8489f0174bbc5cad13e62a224fac76c52653e4e02843d60fc9ac1d50d4daf"} Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.412058 4725 scope.go:117] "RemoveContainer" containerID="84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57" Oct 02 11:36:06 crc kubenswrapper[4725]: E1002 11:36:06.412766 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57\": container with ID starting with 84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57 not found: ID does not exist" containerID="84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57" Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.412806 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57"} err="failed to get container status \"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57\": rpc error: code = NotFound desc = could not find container \"84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57\": container with ID starting with 84479e50554fbeedb538eb398d3c84f2b49b060fe7f4a2bca04db02b08105f57 not found: ID does not exist" Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.423059 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:36:06 crc kubenswrapper[4725]: I1002 11:36:06.430954 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99xdm"] Oct 02 11:36:07 crc kubenswrapper[4725]: I1002 11:36:07.276050 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9fc733-da08-4b08-a234-f85424ed53cb" path="/var/lib/kubelet/pods/ad9fc733-da08-4b08-a234-f85424ed53cb/volumes" Oct 02 11:37:44 crc kubenswrapper[4725]: I1002 11:37:44.978749 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:37:44 crc kubenswrapper[4725]: I1002 11:37:44.979414 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:38:14 crc kubenswrapper[4725]: I1002 11:38:14.978233 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 
02 11:38:14 crc kubenswrapper[4725]: I1002 11:38:14.979055 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:38:44 crc kubenswrapper[4725]: I1002 11:38:44.978761 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:38:44 crc kubenswrapper[4725]: I1002 11:38:44.979503 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:38:44 crc kubenswrapper[4725]: I1002 11:38:44.979574 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:38:44 crc kubenswrapper[4725]: I1002 11:38:44.980527 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:38:44 crc kubenswrapper[4725]: I1002 11:38:44.980625 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3" gracePeriod=600 Oct 02 11:38:45 crc kubenswrapper[4725]: I1002 11:38:45.401459 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3" exitCode=0 Oct 02 11:38:45 crc kubenswrapper[4725]: I1002 11:38:45.401852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3"} Oct 02 11:38:45 crc kubenswrapper[4725]: I1002 11:38:45.401891 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e"} Oct 02 11:38:45 crc kubenswrapper[4725]: I1002 11:38:45.401913 4725 scope.go:117] "RemoveContainer" containerID="7190a4574cd538805fe0f0cfce38011b273a9f7cd55ebf4c6d774d8588b6c6e7" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.260359 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-smqhv"] Oct 02 11:39:42 crc kubenswrapper[4725]: E1002 11:39:42.261118 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ad9fc733-da08-4b08-a234-f85424ed53cb" containerName="registry" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.261130 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9fc733-da08-4b08-a234-f85424ed53cb" containerName="registry" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.261219 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9fc733-da08-4b08-a234-f85424ed53cb" containerName="registry" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.261552 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.264383 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.264873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xjcrp" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.268030 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-pjgh6"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.268558 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.268991 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-pjgh6" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.270473 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l8xrm" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.273842 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-smqhv"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.281491 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-pjgh6"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.290101 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wmnqx"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.290753 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.294637 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-44z8g" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.316053 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wmnqx"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.429771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-545q6\" (UniqueName: \"kubernetes.io/projected/c29a13f6-36a6-49c4-b16d-df2fdcda469b-kube-api-access-545q6\") pod \"cert-manager-cainjector-7f985d654d-smqhv\" (UID: \"c29a13f6-36a6-49c4-b16d-df2fdcda469b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.429917 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6l5l\" (UniqueName: \"kubernetes.io/projected/df4f98fb-c0cd-4e39-b9fd-ec6500f6644e-kube-api-access-t6l5l\") pod \"cert-manager-5b446d88c5-pjgh6\" (UID: \"df4f98fb-c0cd-4e39-b9fd-ec6500f6644e\") " pod="cert-manager/cert-manager-5b446d88c5-pjgh6" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.430025 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdfr\" (UniqueName: \"kubernetes.io/projected/c3b045dd-0aa9-4c6b-8930-16c6ae444847-kube-api-access-nhdfr\") pod \"cert-manager-webhook-5655c58dd6-wmnqx\" (UID: \"c3b045dd-0aa9-4c6b-8930-16c6ae444847\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.530942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6l5l\" (UniqueName: \"kubernetes.io/projected/df4f98fb-c0cd-4e39-b9fd-ec6500f6644e-kube-api-access-t6l5l\") pod \"cert-manager-5b446d88c5-pjgh6\" (UID: \"df4f98fb-c0cd-4e39-b9fd-ec6500f6644e\") " pod="cert-manager/cert-manager-5b446d88c5-pjgh6" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.531013 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdfr\" (UniqueName: \"kubernetes.io/projected/c3b045dd-0aa9-4c6b-8930-16c6ae444847-kube-api-access-nhdfr\") pod \"cert-manager-webhook-5655c58dd6-wmnqx\" (UID: \"c3b045dd-0aa9-4c6b-8930-16c6ae444847\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.531082 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545q6\" (UniqueName: \"kubernetes.io/projected/c29a13f6-36a6-49c4-b16d-df2fdcda469b-kube-api-access-545q6\") pod \"cert-manager-cainjector-7f985d654d-smqhv\" (UID: \"c29a13f6-36a6-49c4-b16d-df2fdcda469b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.552033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545q6\" (UniqueName: \"kubernetes.io/projected/c29a13f6-36a6-49c4-b16d-df2fdcda469b-kube-api-access-545q6\") pod \"cert-manager-cainjector-7f985d654d-smqhv\" (UID: \"c29a13f6-36a6-49c4-b16d-df2fdcda469b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.554035 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nhdfr\" (UniqueName: \"kubernetes.io/projected/c3b045dd-0aa9-4c6b-8930-16c6ae444847-kube-api-access-nhdfr\") pod \"cert-manager-webhook-5655c58dd6-wmnqx\" (UID: \"c3b045dd-0aa9-4c6b-8930-16c6ae444847\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.558842 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6l5l\" (UniqueName: \"kubernetes.io/projected/df4f98fb-c0cd-4e39-b9fd-ec6500f6644e-kube-api-access-t6l5l\") pod \"cert-manager-5b446d88c5-pjgh6\" (UID: \"df4f98fb-c0cd-4e39-b9fd-ec6500f6644e\") " pod="cert-manager/cert-manager-5b446d88c5-pjgh6" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.583406 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.590342 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-pjgh6" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.606090 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.792714 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-pjgh6"] Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.803750 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 11:39:42 crc kubenswrapper[4725]: I1002 11:39:42.828640 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-smqhv"] Oct 02 11:39:42 crc kubenswrapper[4725]: W1002 11:39:42.831245 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc29a13f6_36a6_49c4_b16d_df2fdcda469b.slice/crio-3a4d4be02af8cd7c63040d71f5a0c431af0b8c0e637b5c184eadca0322844c0f WatchSource:0}: Error finding container 3a4d4be02af8cd7c63040d71f5a0c431af0b8c0e637b5c184eadca0322844c0f: Status 404 returned error can't find the container with id 3a4d4be02af8cd7c63040d71f5a0c431af0b8c0e637b5c184eadca0322844c0f Oct 02 11:39:43 crc kubenswrapper[4725]: I1002 11:39:43.070358 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wmnqx"] Oct 02 11:39:43 crc kubenswrapper[4725]: W1002 11:39:43.075865 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b045dd_0aa9_4c6b_8930_16c6ae444847.slice/crio-ab8214c6470821c94dee7e023ffd4195c52935acb57fb847d5cd579860f18ffc WatchSource:0}: Error finding container ab8214c6470821c94dee7e023ffd4195c52935acb57fb847d5cd579860f18ffc: Status 404 returned error can't find the container with id ab8214c6470821c94dee7e023ffd4195c52935acb57fb847d5cd579860f18ffc Oct 02 11:39:43 crc kubenswrapper[4725]: I1002 11:39:43.755756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" event={"ID":"c29a13f6-36a6-49c4-b16d-df2fdcda469b","Type":"ContainerStarted","Data":"3a4d4be02af8cd7c63040d71f5a0c431af0b8c0e637b5c184eadca0322844c0f"} Oct 02 11:39:43 crc kubenswrapper[4725]: I1002 11:39:43.757024 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" event={"ID":"c3b045dd-0aa9-4c6b-8930-16c6ae444847","Type":"ContainerStarted","Data":"ab8214c6470821c94dee7e023ffd4195c52935acb57fb847d5cd579860f18ffc"} Oct 02 11:39:43 crc kubenswrapper[4725]: I1002 11:39:43.757872 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-pjgh6" event={"ID":"df4f98fb-c0cd-4e39-b9fd-ec6500f6644e","Type":"ContainerStarted","Data":"be32c7adebedf7e6db06a916963e228f6ff303876a310aea9da4634cac3f88b1"} Oct 02 11:39:45 crc kubenswrapper[4725]: I1002 11:39:45.768146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-pjgh6" event={"ID":"df4f98fb-c0cd-4e39-b9fd-ec6500f6644e","Type":"ContainerStarted","Data":"d526d8b978ddda0d8cb553a2530df5edd864a94f6653cb139822d1d285221c83"} Oct 02 11:39:45 crc kubenswrapper[4725]: I1002 11:39:45.769707 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" event={"ID":"c29a13f6-36a6-49c4-b16d-df2fdcda469b","Type":"ContainerStarted","Data":"e68bc1517846d7a5a33155880cbfd63fbe2201a3f583ff7663775ed0ece88e8a"} Oct 02 11:39:45 crc kubenswrapper[4725]: I1002 11:39:45.781859 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-pjgh6" podStartSLOduration=0.999829982 podStartE2EDuration="3.781840238s" podCreationTimestamp="2025-10-02 11:39:42 +0000 UTC" firstStartedPulling="2025-10-02 11:39:42.803499216 +0000 UTC m=+702.710998679" lastFinishedPulling="2025-10-02 11:39:45.585509472 +0000 UTC m=+705.493008935" observedRunningTime="2025-10-02 11:39:45.781101639 +0000 UTC m=+705.688601102" watchObservedRunningTime="2025-10-02 11:39:45.781840238 +0000 UTC m=+705.689339701" Oct 02 11:39:46 crc kubenswrapper[4725]: I1002 11:39:46.776358 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" event={"ID":"c3b045dd-0aa9-4c6b-8930-16c6ae444847","Type":"ContainerStarted","Data":"84efff5c294da2844e638e1f5b2323d261c75059848cba57ea37d94868ba0a40"} Oct 02 11:39:46 crc kubenswrapper[4725]: I1002 11:39:46.793771 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" podStartSLOduration=1.41215294 podStartE2EDuration="4.79370901s" podCreationTimestamp="2025-10-02 11:39:42 +0000 UTC" firstStartedPulling="2025-10-02 11:39:43.077830853 +0000 UTC m=+702.985330316" lastFinishedPulling="2025-10-02 11:39:46.459386903 +0000 UTC m=+706.366886386" observedRunningTime="2025-10-02 11:39:46.793335449 +0000 UTC m=+706.700834912" watchObservedRunningTime="2025-10-02 11:39:46.79370901 +0000 UTC m=+706.701208493" Oct 02 11:39:46 crc kubenswrapper[4725]: I1002 11:39:46.796426 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-smqhv" podStartSLOduration=2.129388049 podStartE2EDuration="4.796410601s" podCreationTimestamp="2025-10-02 11:39:42 +0000 UTC" firstStartedPulling="2025-10-02 11:39:42.832637762 +0000 UTC m=+702.740137225" lastFinishedPulling="2025-10-02 11:39:45.499660314 +0000 UTC m=+705.407159777" observedRunningTime="2025-10-02 11:39:45.803134428 +0000 UTC m=+705.710633891" watchObservedRunningTime="2025-10-02 11:39:46.796410601 +0000 UTC m=+706.703910074" Oct 02 11:39:47 crc kubenswrapper[4725]: I1002 11:39:47.606886 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.611076 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-wmnqx" Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.669331 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c2hv"] Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821114 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-controller" containerID="cri-o://be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821187 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="nbdb" containerID="cri-o://e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821244 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="northd" containerID="cri-o://196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821310 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-acl-logging" containerID="cri-o://ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821352 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-node" containerID="cri-o://56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821439 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="sbdb" containerID="cri-o://06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.821354 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" gracePeriod=30 Oct 02 11:39:52 crc kubenswrapper[4725]: I1002 11:39:52.862994 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" containerID="cri-o://666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" gracePeriod=30 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.118221 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/3.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.122845 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovn-acl-logging/0.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.123875 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovn-controller/0.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.124513 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167307 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167390 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167443 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167483 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167517 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167588 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167627 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" 
(UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167675 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167714 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167773 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167861 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167900 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167940 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.167975 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcrdd\" (UniqueName: \"kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169703 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169815 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169857 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169892 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket" (OuterVolumeSpecName: "log-socket") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.168008 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.169979 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170018 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin\") pod \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\" (UID: \"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2\") " Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170415 4725 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170441 4725 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170461 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170477 4725 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170493 4725 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-log-socket\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170541 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170581 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.170616 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.171489 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172090 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172130 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172929 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.172952 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash" (OuterVolumeSpecName: "host-slash") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173038 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log" (OuterVolumeSpecName: "node-log") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173090 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173523 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c4r2t"] Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173734 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="northd" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173745 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="northd" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173755 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173761 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173774 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173781 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173790 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173795 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173802 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="nbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173807 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="nbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173814 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173820 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173827 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-node" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173832 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-node" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173839 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173844 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173851 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-acl-logging" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173857 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-acl-logging" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173865 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="sbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173870 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="sbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173877 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173883 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.173892 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kubecfg-setup" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173897 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kubecfg-setup" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173975 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173984 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.173993 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174001 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174009 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="northd" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174017 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="kube-rbac-proxy-node" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174026 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="sbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174033 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovn-acl-logging" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174042 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="nbdb" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174050 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.174132 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174140 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174232 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.174241 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerName="ovnkube-controller" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.175700 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.182891 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.183053 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd" (OuterVolumeSpecName: "kube-api-access-bcrdd") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "kube-api-access-bcrdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.198186 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" (UID: "d6cd2823-e7fc-454e-9ec2-e3dcc81472e2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-systemd-units\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjl8\" (UniqueName: \"kubernetes.io/projected/3b9d6183-2d36-4b04-9850-bed46c031965-kube-api-access-sqjl8\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271686 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-netd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271786 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-ovn\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-bin\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271854 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271922 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9d6183-2d36-4b04-9850-bed46c031965-ovn-node-metrics-cert\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.271961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-netns\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-kubelet\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272038 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-node-log\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272076 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-etc-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-script-lib\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272182 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-env-overrides\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-systemd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272249 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-config\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-var-lib-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272382 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-slash\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272431 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-log-socket\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272607 4725 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272635 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272662 4725 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272684 4725 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272707 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272768 4725 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272801 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272826 4725 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-slash\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272849 4725 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-node-log\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc 
kubenswrapper[4725]: I1002 11:39:53.272890 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272912 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272933 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272954 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.272978 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcrdd\" (UniqueName: \"kubernetes.io/projected/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-kube-api-access-bcrdd\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.273001 4725 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374144 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-var-lib-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374194 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374202 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-slash\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374280 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-slash\") pod \"ovnkube-node-c4r2t\" (UID: 
\"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-log-socket\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-log-socket\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374399 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-var-lib-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374451 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-systemd-units\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374413 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-systemd-units\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-ovn-kubernetes\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374545 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjl8\" (UniqueName: \"kubernetes.io/projected/3b9d6183-2d36-4b04-9850-bed46c031965-kube-api-access-sqjl8\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374629 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-netd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-ovn\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-netd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-bin\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9d6183-2d36-4b04-9850-bed46c031965-ovn-node-metrics-cert\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-cni-bin\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-netns\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374971 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-kubelet\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375014 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-node-log\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375104 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-etc-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375147 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375181 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-script-lib\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375196 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-kubelet\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.374873 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-ovn\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375242 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-env-overrides\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375819 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-etc-openvswitch\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-host-run-netns\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.375880 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-systemd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.376000 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-run-systemd\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.376077 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9d6183-2d36-4b04-9850-bed46c031965-node-log\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.376086 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-config\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.376158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-script-lib\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.376262 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-env-overrides\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.377171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9d6183-2d36-4b04-9850-bed46c031965-ovnkube-config\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.380011 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9d6183-2d36-4b04-9850-bed46c031965-ovn-node-metrics-cert\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.396999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjl8\" (UniqueName: \"kubernetes.io/projected/3b9d6183-2d36-4b04-9850-bed46c031965-kube-api-access-sqjl8\") pod \"ovnkube-node-c4r2t\" (UID: \"3b9d6183-2d36-4b04-9850-bed46c031965\") " pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.514199 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.829427 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/2.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.830582 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/1.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.830661 4725 generic.go:334] "Generic (PLEG): container finished" podID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" containerID="e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8" exitCode=2 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.830805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerDied","Data":"e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.830879 4725 scope.go:117] "RemoveContainer" containerID="81bb0c1d25c8b1ec5479bb62703db4a7d3033c94da2dbf325cf142989f1d040e" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.831659 4725 scope.go:117] "RemoveContainer" containerID="e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8" Oct 02 11:39:53 crc kubenswrapper[4725]: E1002 11:39:53.832063 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2q2jl_openshift-multus(15fc62f2-0a7e-477c-8e35-0888c40e2d6c)\"" pod="openshift-multus/multus-2q2jl" podUID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.834403 4725 generic.go:334] "Generic (PLEG): container finished" podID="3b9d6183-2d36-4b04-9850-bed46c031965" containerID="a37e0c9cf69390bd3f7a3a689c24d14febff3773e67371ae5eb4f37260cbbb28" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.834531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerDied","Data":"a37e0c9cf69390bd3f7a3a689c24d14febff3773e67371ae5eb4f37260cbbb28"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.834599 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"61814c0098ce636fef20d31c755671072e150ca87a52fae5984b989fb236c678"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.841082 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovnkube-controller/3.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.844466 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovn-acl-logging/0.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845163 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c2hv_d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/ovn-controller/0.log" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845614 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845658 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845673 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845688 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845701 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845713 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" exitCode=0 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845766 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" exitCode=143 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845780 4725 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" exitCode=143 Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845853 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845899 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:39:53 crc 
kubenswrapper[4725]: I1002 11:39:53.845938 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845959 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845976 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845988 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.845998 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846009 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846074 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846087 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846100 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846111 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846121 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846136 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846156 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846168 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846179 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846190 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846201 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846212 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846223 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846234 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846244 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846255 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846269 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846290 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846301 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846312 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846322 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846332 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846342 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846352 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846363 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846374 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846384 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846398 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" event={"ID":"d6cd2823-e7fc-454e-9ec2-e3dcc81472e2","Type":"ContainerDied","Data":"6fa823a6e770e3dfa08d4bb4e5fc93b51d7e315727f6038522dedf28ea806a42"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846413 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846425 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846436 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846446 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846456 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846466 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846476 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846486 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846512 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846565 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.846755 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c2hv" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.876295 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c2hv"] Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.877995 4725 scope.go:117] "RemoveContainer" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.882091 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c2hv"] Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.913781 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.937496 4725 scope.go:117] "RemoveContainer" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.974506 4725 scope.go:117] "RemoveContainer" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:53 crc kubenswrapper[4725]: I1002 11:39:53.990048 4725 scope.go:117] "RemoveContainer" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.009233 4725 scope.go:117] "RemoveContainer" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.030340 4725 scope.go:117] "RemoveContainer" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.045927 4725 scope.go:117] "RemoveContainer" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.091010 4725 scope.go:117] "RemoveContainer" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.110121 4725 scope.go:117] "RemoveContainer" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.146307 4725 scope.go:117] "RemoveContainer" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.147948 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": container with ID starting with 666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92 not found: ID does not exist" 
containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148006 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} err="failed to get container status \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": rpc error: code = NotFound desc = could not find container \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": container with ID starting with 666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148048 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.148574 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": container with ID starting with 6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23 not found: ID does not exist" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148612 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} err="failed to get container status \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": rpc error: code = NotFound desc = could not find container \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": container with ID starting with 6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148637 4725 scope.go:117] "RemoveContainer" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.148872 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": container with ID starting with 06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44 not found: ID does not exist" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148908 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} err="failed to get container status \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": rpc error: code = NotFound desc = could not find container \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": container with ID starting with 06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.148928 4725 scope.go:117] "RemoveContainer" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.149264 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": container with ID starting with e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388 not found: ID does not exist" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.149293 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} err="failed to get container status \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": rpc error: code = NotFound desc = could not find container \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": container with ID starting with e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.149310 4725 scope.go:117] "RemoveContainer" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.149768 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": container with ID starting with 196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151 not found: ID does not exist" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.149803 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} err="failed to get container status \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": rpc error: code = NotFound desc = could not find container \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": container with ID starting with 196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.149836 4725 scope.go:117] "RemoveContainer" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.150501 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": container with ID starting with f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732 not found: ID does not exist" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.150563 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} err="failed to get container status \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": rpc error: code = NotFound desc = could not find container \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": container with ID starting with f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.150651 4725 scope.go:117] "RemoveContainer" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc 
kubenswrapper[4725]: E1002 11:39:54.151129 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": container with ID starting with 56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519 not found: ID does not exist" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.151161 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} err="failed to get container status \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": rpc error: code = NotFound desc = could not find container \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": container with ID starting with 56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.151179 4725 scope.go:117] "RemoveContainer" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.151443 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": container with ID starting with ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a not found: ID does not exist" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.151476 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} err="failed to get container status \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": rpc error: code = NotFound desc = could not find container \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": container with ID starting with ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.151527 4725 scope.go:117] "RemoveContainer" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.151868 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": container with ID starting with be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c not found: ID does not exist" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.151898 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} err="failed to get container status \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": rpc error: code = NotFound desc = could not find container \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": container with ID starting with be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: 
I1002 11:39:54.151916 4725 scope.go:117] "RemoveContainer" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: E1002 11:39:54.152383 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": container with ID starting with e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39 not found: ID does not exist" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.152417 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} err="failed to get container status \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": rpc error: code = NotFound desc = could not find container \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": container with ID starting with e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.152437 4725 scope.go:117] "RemoveContainer" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.152693 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} err="failed to get container status \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": rpc error: code = NotFound desc = could not find container \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": container with ID starting with 666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.152733 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153045 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} err="failed to get container status \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": rpc error: code = NotFound desc = could not find container \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": container with ID starting with 6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153069 4725 scope.go:117] "RemoveContainer" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153324 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} err="failed to get container status \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": rpc error: code = NotFound desc = could not find container \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": container with ID starting with 06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: 
I1002 11:39:54.153351 4725 scope.go:117] "RemoveContainer" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153635 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} err="failed to get container status \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": rpc error: code = NotFound desc = could not find container \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": container with ID starting with e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153658 4725 scope.go:117] "RemoveContainer" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153948 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} err="failed to get container status \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": rpc error: code = NotFound desc = could not find container \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": container with ID starting with 196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.153969 4725 scope.go:117] "RemoveContainer" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154257 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} err="failed to get container status \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": rpc error: code = NotFound desc = could not find container \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": container with ID starting with f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154281 4725 scope.go:117] "RemoveContainer" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154542 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} err="failed to get container status \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": rpc error: code = NotFound desc = could not find container \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": container with ID starting with 56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154562 4725 scope.go:117] "RemoveContainer" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154834 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} err="failed to get container status 
\"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": rpc error: code = NotFound desc = could not find container \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": container with ID starting with ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.154855 4725 scope.go:117] "RemoveContainer" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155103 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} err="failed to get container status \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": rpc error: code = NotFound desc = could not find container \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": container with ID starting with be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155124 4725 scope.go:117] "RemoveContainer" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155358 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} err="failed to get container status \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": rpc error: code = NotFound desc = could not find container \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": container with ID starting with e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155390 4725 scope.go:117] "RemoveContainer" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155656 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} err="failed to get container status \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": rpc error: code = NotFound desc = could not find container \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": container with ID starting with 666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.155684 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156288 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} err="failed to get container status \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": rpc error: code = NotFound desc = could not find container \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": container with ID starting with 6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156312 4725 scope.go:117] "RemoveContainer" 
containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156621 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} err="failed to get container status \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": rpc error: code = NotFound desc = could not find container \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": container with ID starting with 06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156644 4725 scope.go:117] "RemoveContainer" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156913 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} err="failed to get container status \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": rpc error: code = NotFound desc = could not find container \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": container with ID starting with e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.156943 4725 scope.go:117] "RemoveContainer" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.157317 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} err="failed to get container status \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": rpc error: code = NotFound desc = could not find container \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": container with ID starting with 196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.157345 4725 scope.go:117] "RemoveContainer" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.157582 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} err="failed to get container status \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": rpc error: code = NotFound desc = could not find container \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": container with ID starting with f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.157615 4725 scope.go:117] "RemoveContainer" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.157975 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} err="failed to get container status \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": rpc error: code = NotFound desc = could not find 
container \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": container with ID starting with 56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.158003 4725 scope.go:117] "RemoveContainer" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.158412 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} err="failed to get container status \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": rpc error: code = NotFound desc = could not find container \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": container with ID starting with ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.158437 4725 scope.go:117] "RemoveContainer" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.158773 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} err="failed to get container status \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": rpc error: code = NotFound desc = could not find container \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": container with ID starting with be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.158808 4725 scope.go:117] "RemoveContainer" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.159595 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} err="failed to get container status \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": rpc error: code = NotFound desc = could not find container \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": container with ID starting with e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.159616 4725 scope.go:117] "RemoveContainer" containerID="666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.159892 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92"} err="failed to get container status \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": rpc error: code = NotFound desc = could not find container \"666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92\": container with ID starting with 666c01dece0be272e1efd2984c89b8a62a5a085090b88b807b9e38a14d12af92 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.159916 4725 scope.go:117] "RemoveContainer" containerID="6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160207 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23"} err="failed to get container status \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": rpc error: code = NotFound desc = could not find container \"6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23\": container with ID starting with 6cdf074f4bb3552762e3ed99c6b672cce778edc46643a5afe8d22322a15c7f23 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160228 4725 scope.go:117] "RemoveContainer" containerID="06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160571 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44"} err="failed to get container status \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": rpc error: code = NotFound desc = could not find container \"06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44\": container with ID starting with 06750a9f63cb75c090ae94f546a059110b039a0df102bfc777883829540c5a44 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160594 4725 scope.go:117] "RemoveContainer" containerID="e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160864 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388"} err="failed to get container status \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": rpc error: code = NotFound desc = could not find container \"e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388\": container with ID starting with e9dd9eeb4b9e8676471aa42ea236226b2237293b198d4cc1228ce8aafe181388 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.160886 4725 scope.go:117] "RemoveContainer" containerID="196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161264 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151"} err="failed to get container status \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": rpc error: code = NotFound desc = could not find container \"196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151\": container with ID starting with 196a8c6f03735cd1f1fcd264ab4898567c1f273a795b950de0b2b5e3065c2151 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161291 4725 scope.go:117] "RemoveContainer" containerID="f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161505 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732"} err="failed to get container status \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": rpc error: code = NotFound desc = could not find container \"f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732\": container with ID starting with 
f44e50aaba9eb80a6e80dcf636b8fdb7819d2687df4523f10c2f343ec8696732 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161528 4725 scope.go:117] "RemoveContainer" containerID="56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161850 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519"} err="failed to get container status \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": rpc error: code = NotFound desc = could not find container \"56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519\": container with ID starting with 56854aa93b5ad4b2777d4b28409d5262d1f23d20ee5fbbdf5ee3c5dde86a4519 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.161874 4725 scope.go:117] "RemoveContainer" containerID="ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.162162 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a"} err="failed to get container status \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": rpc error: code = NotFound desc = could not find container \"ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a\": container with ID starting with ae8cd3fd9d550917c83fb1cfef516cb7623fe4bc902f48ed04d6af2791fcdc4a not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.162184 4725 scope.go:117] "RemoveContainer" containerID="be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.162391 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c"} err="failed to get container status \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": rpc error: code = NotFound desc = could not find container \"be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c\": container with ID starting with be19e52fe052eaf5b37d1daaec8a639ce48560f038648e6a3e7da42db1ac304c not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.162417 4725 scope.go:117] "RemoveContainer" containerID="e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.162644 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39"} err="failed to get container status \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": rpc error: code = NotFound desc = could not find container \"e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39\": container with ID starting with e6e9f9712b8ae7f878e6fd094e4eeddaece8a953746990ce2fc3f938e2c0ca39 not found: ID does not exist" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.855554 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/2.log" Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862302 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"70437406eeea95c77b2414252f65dee36525dcf20d22517e904c7608f525c0e4"} Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"fbb4025aba873d84127adef17a763b85627bb4f0e59a130cfa97602d87066151"} Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"2e227d3ecd868cf6a22fe3a49d19c47feffcc447e910ac769c2c7627558918dc"} Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862362 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"e40cdb0df8d599bdd515933212eb0285d0361da28b2f7aa4ccbefb71d1eef44c"} Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862371 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"7bca2646020133576f6942bee6c8d39e76ebd9135018f09ec71d2e5c9a0b3964"} Oct 02 11:39:54 crc kubenswrapper[4725]: I1002 11:39:54.862379 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"fb61bd7dadca17ee23f9e946b76fbdcd8009f24509ca8570c5303e6ea85e4daa"} Oct 02 11:39:55 crc kubenswrapper[4725]: I1002 11:39:55.283469 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cd2823-e7fc-454e-9ec2-e3dcc81472e2" path="/var/lib/kubelet/pods/d6cd2823-e7fc-454e-9ec2-e3dcc81472e2/volumes" Oct 02 11:39:57 crc kubenswrapper[4725]: I1002 11:39:57.893806 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"1e41cfeba13ab13fbf057b04ed6ed8f7af8c551313ded888b03c3c2f7c24f33e"} Oct 02 11:39:59 crc kubenswrapper[4725]: I1002 11:39:59.912389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" event={"ID":"3b9d6183-2d36-4b04-9850-bed46c031965","Type":"ContainerStarted","Data":"35c2a6ab9c6ae52e5a4f39b2b1d9f531f4b6a895e11e7972f56aa1f5cc8ce752"} Oct 02 11:39:59 crc kubenswrapper[4725]: I1002 11:39:59.912759 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:59 crc kubenswrapper[4725]: I1002 11:39:59.912778 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:39:59 crc kubenswrapper[4725]: I1002 11:39:59.949857 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" podStartSLOduration=6.949829789 podStartE2EDuration="6.949829789s" podCreationTimestamp="2025-10-02 11:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:39:59.948230167 +0000 UTC m=+719.855729630" 
watchObservedRunningTime="2025-10-02 11:39:59.949829789 +0000 UTC m=+719.857329312" Oct 02 11:39:59 crc kubenswrapper[4725]: I1002 11:39:59.954296 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:40:00 crc kubenswrapper[4725]: I1002 11:40:00.925132 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:40:00 crc kubenswrapper[4725]: I1002 11:40:00.963022 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:40:06 crc kubenswrapper[4725]: I1002 11:40:06.268258 4725 scope.go:117] "RemoveContainer" containerID="e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8" Oct 02 11:40:06 crc kubenswrapper[4725]: E1002 11:40:06.269363 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2q2jl_openshift-multus(15fc62f2-0a7e-477c-8e35-0888c40e2d6c)\"" pod="openshift-multus/multus-2q2jl" podUID="15fc62f2-0a7e-477c-8e35-0888c40e2d6c" Oct 02 11:40:18 crc kubenswrapper[4725]: I1002 11:40:18.267624 4725 scope.go:117] "RemoveContainer" containerID="e96bdcb9d80476d1369450fc2610306d82c9ba2ad06315db71d6c29ff0f76cb8" Oct 02 11:40:19 crc kubenswrapper[4725]: I1002 11:40:19.032122 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2q2jl_15fc62f2-0a7e-477c-8e35-0888c40e2d6c/kube-multus/2.log" Oct 02 11:40:19 crc kubenswrapper[4725]: I1002 11:40:19.032541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2q2jl" event={"ID":"15fc62f2-0a7e-477c-8e35-0888c40e2d6c","Type":"ContainerStarted","Data":"50652dc9a375b4a23ea0c3d19704df800082e06919455ef5d1a78c0d17f82297"} Oct 02 11:40:23 crc kubenswrapper[4725]: I1002 11:40:23.540086 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c4r2t" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.370360 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz"] Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.372651 4725 util.go:30] "No sandbox for pod can be found. 
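
The long run of "RemoveContainer" / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" records earlier in this span is the kubelet repeatedly asking the runtime to delete containers whose records are already gone: a NotFound answer is logged but is effectively terminal, since there is nothing left to remove. A minimal sketch of that idempotent-delete pattern, with the runtime client type and helper invented for illustration (this is not the kubelet's real API):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("NotFound")

    // fakeRuntime stands in for the CRI runtime; only the behavior
    // needed to show the pattern is modeled here.
    type fakeRuntime struct{ containers map[string]bool }

    func (r *fakeRuntime) removeContainer(id string) error {
        if !r.containers[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(r.containers, id)
        return nil
    }

    // deleteIdempotent treats NotFound as "already deleted": the error
    // is surfaced in the log but does not fail the cleanup pass.
    func deleteIdempotent(r *fakeRuntime, id string) {
        if err := r.removeContainer(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("DeleteContainer returned error: %v (already gone)\n", err)
                return
            }
            fmt.Printf("DeleteContainer failed: %v\n", err)
        }
    }

    func main() {
        r := &fakeRuntime{containers: map[string]bool{}}
        deleteIdempotent(r, "e9dd9eeb4b9e") // prints the NotFound message, as in the log
    }

The "back-off 20s restarting failed container=kube-multus" record in the same span is the kubelet's crash-loop backoff: restart delays double from 10s up to a five-minute cap, so 20s indicates an early retry.
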
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.380191 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz"] Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.382291 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.538084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.538167 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.538194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vzk\" (UniqueName: \"kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.638938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.639024 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.639050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6vzk\" (UniqueName: \"kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.639469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.639636 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.663381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6vzk\" (UniqueName: \"kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.689240 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:32 crc kubenswrapper[4725]: I1002 11:40:32.901851 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz"] Oct 02 11:40:33 crc kubenswrapper[4725]: I1002 11:40:33.121300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerStarted","Data":"d365a26d0911b97128a905ad9fecc1bbc992f77a3de9ee6279f634f1ee59f691"} Oct 02 11:40:33 crc kubenswrapper[4725]: I1002 11:40:33.121343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerStarted","Data":"a2d5a182afb20a0241f71a329047ef357af93ffc6ca887c012cf636f464f1743"} Oct 02 11:40:34 crc kubenswrapper[4725]: I1002 11:40:34.128710 4725 generic.go:334] "Generic (PLEG): container finished" podID="f1f9f2b2-614f-454e-8f94-af2108154130" containerID="d365a26d0911b97128a905ad9fecc1bbc992f77a3de9ee6279f634f1ee59f691" exitCode=0 Oct 02 11:40:34 crc kubenswrapper[4725]: I1002 11:40:34.128802 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerDied","Data":"d365a26d0911b97128a905ad9fecc1bbc992f77a3de9ee6279f634f1ee59f691"} Oct 02 11:40:36 crc kubenswrapper[4725]: I1002 11:40:36.144473 4725 generic.go:334] "Generic (PLEG): container finished" podID="f1f9f2b2-614f-454e-8f94-af2108154130" containerID="fd902395521caa24acdaaee80b80c0deb8a3f85a7e2a992f4e4fb993e438bd77" exitCode=0 Oct 02 11:40:36 crc kubenswrapper[4725]: I1002 11:40:36.144814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" 
event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerDied","Data":"fd902395521caa24acdaaee80b80c0deb8a3f85a7e2a992f4e4fb993e438bd77"} Oct 02 11:40:37 crc kubenswrapper[4725]: I1002 11:40:37.156537 4725 generic.go:334] "Generic (PLEG): container finished" podID="f1f9f2b2-614f-454e-8f94-af2108154130" containerID="b9d40a1ded4603359f3b1a4cda1f250fc917da4d6efe493daf865451186ab00b" exitCode=0 Oct 02 11:40:37 crc kubenswrapper[4725]: I1002 11:40:37.156599 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerDied","Data":"b9d40a1ded4603359f3b1a4cda1f250fc917da4d6efe493daf865451186ab00b"} Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.389982 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.511827 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util\") pod \"f1f9f2b2-614f-454e-8f94-af2108154130\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.511930 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6vzk\" (UniqueName: \"kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk\") pod \"f1f9f2b2-614f-454e-8f94-af2108154130\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.511963 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle\") pod \"f1f9f2b2-614f-454e-8f94-af2108154130\" (UID: \"f1f9f2b2-614f-454e-8f94-af2108154130\") " Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.512684 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle" (OuterVolumeSpecName: "bundle") pod "f1f9f2b2-614f-454e-8f94-af2108154130" (UID: "f1f9f2b2-614f-454e-8f94-af2108154130"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.519147 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk" (OuterVolumeSpecName: "kube-api-access-k6vzk") pod "f1f9f2b2-614f-454e-8f94-af2108154130" (UID: "f1f9f2b2-614f-454e-8f94-af2108154130"). InnerVolumeSpecName "kube-api-access-k6vzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.526612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util" (OuterVolumeSpecName: "util") pod "f1f9f2b2-614f-454e-8f94-af2108154130" (UID: "f1f9f2b2-614f-454e-8f94-af2108154130"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.613888 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6vzk\" (UniqueName: \"kubernetes.io/projected/f1f9f2b2-614f-454e-8f94-af2108154130-kube-api-access-k6vzk\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.613933 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:38 crc kubenswrapper[4725]: I1002 11:40:38.613942 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f9f2b2-614f-454e-8f94-af2108154130-util\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:39 crc kubenswrapper[4725]: I1002 11:40:39.168044 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" event={"ID":"f1f9f2b2-614f-454e-8f94-af2108154130","Type":"ContainerDied","Data":"a2d5a182afb20a0241f71a329047ef357af93ffc6ca887c012cf636f464f1743"} Oct 02 11:40:39 crc kubenswrapper[4725]: I1002 11:40:39.168081 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d5a182afb20a0241f71a329047ef357af93ffc6ca887c012cf636f464f1743" Oct 02 11:40:39 crc kubenswrapper[4725]: I1002 11:40:39.168108 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz" Oct 02 11:40:40 crc kubenswrapper[4725]: I1002 11:40:40.655425 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:40:40 crc kubenswrapper[4725]: I1002 11:40:40.657337 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" podUID="958f4455-ed96-4896-b03c-dec837e33311" containerName="controller-manager" containerID="cri-o://e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be" gracePeriod=30 Oct 02 11:40:40 crc kubenswrapper[4725]: I1002 11:40:40.741108 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:40:40 crc kubenswrapper[4725]: I1002 11:40:40.741395 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" containerID="cri-o://aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659" gracePeriod=30 Oct 02 11:40:40 crc kubenswrapper[4725]: I1002 11:40:40.990354 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.056760 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147111 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l6r\" (UniqueName: \"kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r\") pod \"958f4455-ed96-4896-b03c-dec837e33311\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147181 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert\") pod \"b8342476-d06a-48c8-84de-89b1531728e1\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147231 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca\") pod \"958f4455-ed96-4896-b03c-dec837e33311\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147286 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config\") pod \"958f4455-ed96-4896-b03c-dec837e33311\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147360 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles\") pod \"958f4455-ed96-4896-b03c-dec837e33311\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147404 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm29n\" (UniqueName: \"kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n\") pod \"b8342476-d06a-48c8-84de-89b1531728e1\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147428 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca\") pod \"b8342476-d06a-48c8-84de-89b1531728e1\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.147450 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert\") pod \"958f4455-ed96-4896-b03c-dec837e33311\" (UID: \"958f4455-ed96-4896-b03c-dec837e33311\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "958f4455-ed96-4896-b03c-dec837e33311" (UID: "958f4455-ed96-4896-b03c-dec837e33311"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca" (OuterVolumeSpecName: "client-ca") pod "958f4455-ed96-4896-b03c-dec837e33311" (UID: "958f4455-ed96-4896-b03c-dec837e33311"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148076 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8342476-d06a-48c8-84de-89b1531728e1" (UID: "b8342476-d06a-48c8-84de-89b1531728e1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148088 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config\") pod \"b8342476-d06a-48c8-84de-89b1531728e1\" (UID: \"b8342476-d06a-48c8-84de-89b1531728e1\") " Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148133 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config" (OuterVolumeSpecName: "config") pod "958f4455-ed96-4896-b03c-dec837e33311" (UID: "958f4455-ed96-4896-b03c-dec837e33311"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148524 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config" (OuterVolumeSpecName: "config") pod "b8342476-d06a-48c8-84de-89b1531728e1" (UID: "b8342476-d06a-48c8-84de-89b1531728e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148549 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148569 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148582 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.148594 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/958f4455-ed96-4896-b03c-dec837e33311-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.155539 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n" (OuterVolumeSpecName: "kube-api-access-wm29n") pod "b8342476-d06a-48c8-84de-89b1531728e1" (UID: "b8342476-d06a-48c8-84de-89b1531728e1"). InnerVolumeSpecName "kube-api-access-wm29n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.155580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "958f4455-ed96-4896-b03c-dec837e33311" (UID: "958f4455-ed96-4896-b03c-dec837e33311"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.155594 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8342476-d06a-48c8-84de-89b1531728e1" (UID: "b8342476-d06a-48c8-84de-89b1531728e1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.157303 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r" (OuterVolumeSpecName: "kube-api-access-42l6r") pod "958f4455-ed96-4896-b03c-dec837e33311" (UID: "958f4455-ed96-4896-b03c-dec837e33311"). InnerVolumeSpecName "kube-api-access-42l6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.180341 4725 generic.go:334] "Generic (PLEG): container finished" podID="b8342476-d06a-48c8-84de-89b1531728e1" containerID="aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659" exitCode=0 Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.180389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" event={"ID":"b8342476-d06a-48c8-84de-89b1531728e1","Type":"ContainerDied","Data":"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659"} Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.180427 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" event={"ID":"b8342476-d06a-48c8-84de-89b1531728e1","Type":"ContainerDied","Data":"c818d2be84b8f34945cece373a06369c02144720c562f5d6810e0b37cf72d9a1"} Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.180444 4725 scope.go:117] "RemoveContainer" containerID="aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.180402 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.182463 4725 generic.go:334] "Generic (PLEG): container finished" podID="958f4455-ed96-4896-b03c-dec837e33311" containerID="e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be" exitCode=0 Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.182494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" event={"ID":"958f4455-ed96-4896-b03c-dec837e33311","Type":"ContainerDied","Data":"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be"} Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.182517 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.182520 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m285b" event={"ID":"958f4455-ed96-4896-b03c-dec837e33311","Type":"ContainerDied","Data":"8af9b9b48b51e10fe1cd25dd7de2ac4e6efdc4d4730cc495f179f9188ffcb539"} Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.200337 4725 scope.go:117] "RemoveContainer" containerID="aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659" Oct 02 11:40:41 crc kubenswrapper[4725]: E1002 11:40:41.200794 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659\": container with ID starting with aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659 not found: ID does not exist" containerID="aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.200843 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659"} err="failed to get container status \"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659\": rpc error: code = NotFound desc = could not find container \"aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659\": container with ID starting with aeb87baabb37c4797d7fb893f841bda9cf8177ac17e8af96f9c1c1aaf40f5659 not found: ID does not exist" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.200871 4725 scope.go:117] "RemoveContainer" containerID="e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.217017 4725 scope.go:117] "RemoveContainer" containerID="e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be" Oct 02 11:40:41 crc kubenswrapper[4725]: E1002 11:40:41.217566 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be\": container with ID starting with e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be not found: ID does not exist" containerID="e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.217623 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be"} err="failed to get container status \"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be\": rpc error: code = NotFound desc = could not find container \"e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be\": container with ID starting with e5c023e2f3183f6d334ae23cee9195aa4209cbd23434cdec247082d1fc61d6be not found: ID does not exist" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.220510 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.223303 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6gs7l"] Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.231446 
4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.236056 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m285b"] Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.249918 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm29n\" (UniqueName: \"kubernetes.io/projected/b8342476-d06a-48c8-84de-89b1531728e1-kube-api-access-wm29n\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.249967 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/958f4455-ed96-4896-b03c-dec837e33311-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.249985 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8342476-d06a-48c8-84de-89b1531728e1-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.250002 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l6r\" (UniqueName: \"kubernetes.io/projected/958f4455-ed96-4896-b03c-dec837e33311-kube-api-access-42l6r\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.250018 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8342476-d06a-48c8-84de-89b1531728e1-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.277665 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958f4455-ed96-4896-b03c-dec837e33311" path="/var/lib/kubelet/pods/958f4455-ed96-4896-b03c-dec837e33311/volumes" Oct 02 11:40:41 crc kubenswrapper[4725]: I1002 11:40:41.278846 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8342476-d06a-48c8-84de-89b1531728e1" path="/var/lib/kubelet/pods/b8342476-d06a-48c8-84de-89b1531728e1/volumes" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154156 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t"] Oct 02 11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.154555 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="util" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154584 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="util" Oct 02 11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.154607 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958f4455-ed96-4896-b03c-dec837e33311" containerName="controller-manager" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154620 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="958f4455-ed96-4896-b03c-dec837e33311" containerName="controller-manager" Oct 02 11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.154648 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154661 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" Oct 02 
11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.154684 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="pull" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154696 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="pull" Oct 02 11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.154718 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="extract" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154764 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="extract" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154965 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f9f2b2-614f-454e-8f94-af2108154130" containerName="extract" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.154987 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8342476-d06a-48c8-84de-89b1531728e1" containerName="route-controller-manager" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.155016 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="958f4455-ed96-4896-b03c-dec837e33311" containerName="controller-manager" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.155642 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.158281 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.158595 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.158645 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.158954 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.159089 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.159188 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85bd4fb68-zcj5x"] Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.159240 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.160375 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.179795 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.180135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.180417 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.180141 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.180805 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.181461 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.189107 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-proxy-ca-bundles\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.189173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tjc\" (UniqueName: \"kubernetes.io/projected/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-kube-api-access-29tjc\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.189230 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-config\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.189267 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-client-ca\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.189471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-serving-cert\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.202028 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t"] Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.220418 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85bd4fb68-zcj5x"] Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.226810 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.245803 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t"] Oct 02 11:40:42 crc kubenswrapper[4725]: E1002 11:40:42.247770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-9bc8w serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" podUID="e5c46d62-0d85-42b3-8385-f6e7af8afcc5" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.291543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-serving-cert\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292311 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-proxy-ca-bundles\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292420 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tjc\" (UniqueName: \"kubernetes.io/projected/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-kube-api-access-29tjc\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292568 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-config\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-client-ca\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " 
pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292682 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292789 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bc8w\" (UniqueName: \"kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.292831 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.293176 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-proxy-ca-bundles\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.293833 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-client-ca\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.294409 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-config\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.295117 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-serving-cert\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.311511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tjc\" (UniqueName: \"kubernetes.io/projected/7b5903b0-f701-4cfc-8a28-e7f90d398a4d-kube-api-access-29tjc\") pod \"controller-manager-85bd4fb68-zcj5x\" (UID: \"7b5903b0-f701-4cfc-8a28-e7f90d398a4d\") " pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.393123 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.393182 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bc8w\" (UniqueName: \"kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.393203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.393250 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.394111 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.394309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.396931 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.410374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bc8w\" (UniqueName: \"kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w\") pod \"route-controller-manager-dd8fb74b5-zdn7t\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.495505 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:42 crc kubenswrapper[4725]: I1002 11:40:42.743010 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85bd4fb68-zcj5x"] Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.209093 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" event={"ID":"7b5903b0-f701-4cfc-8a28-e7f90d398a4d","Type":"ContainerStarted","Data":"378c9221c941d29ed020e0d47e76bfb2916e5450e9ce32fd327240c5deed4249"} Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.209456 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" event={"ID":"7b5903b0-f701-4cfc-8a28-e7f90d398a4d","Type":"ContainerStarted","Data":"de18354295abe7f67d7e698269f8a5ce53644f1432f28d2e6af65f116060f26f"} Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.209473 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.209127 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.216856 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.231840 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" podStartSLOduration=3.231821924 podStartE2EDuration="3.231821924s" podCreationTimestamp="2025-10-02 11:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:40:43.228103857 +0000 UTC m=+763.135603330" watchObservedRunningTime="2025-10-02 11:40:43.231821924 +0000 UTC m=+763.139321387" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.266129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85bd4fb68-zcj5x" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.405482 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca\") pod \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.405529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bc8w\" (UniqueName: \"kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w\") pod \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.405615 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config\") pod \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.405648 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert\") pod \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\" (UID: \"e5c46d62-0d85-42b3-8385-f6e7af8afcc5\") " Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.407125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5c46d62-0d85-42b3-8385-f6e7af8afcc5" (UID: "e5c46d62-0d85-42b3-8385-f6e7af8afcc5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.407412 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config" (OuterVolumeSpecName: "config") pod "e5c46d62-0d85-42b3-8385-f6e7af8afcc5" (UID: "e5c46d62-0d85-42b3-8385-f6e7af8afcc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.411649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5c46d62-0d85-42b3-8385-f6e7af8afcc5" (UID: "e5c46d62-0d85-42b3-8385-f6e7af8afcc5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.416762 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w" (OuterVolumeSpecName: "kube-api-access-9bc8w") pod "e5c46d62-0d85-42b3-8385-f6e7af8afcc5" (UID: "e5c46d62-0d85-42b3-8385-f6e7af8afcc5"). InnerVolumeSpecName "kube-api-access-9bc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.506651 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.506692 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bc8w\" (UniqueName: \"kubernetes.io/projected/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-kube-api-access-9bc8w\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.506707 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.506734 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c46d62-0d85-42b3-8385-f6e7af8afcc5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.867010 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5"] Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.868156 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.870002 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.870136 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7sdhq" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.870393 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 02 11:40:43 crc kubenswrapper[4725]: I1002 11:40:43.877465 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.011446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvth\" (UniqueName: \"kubernetes.io/projected/b4436bab-3a23-4c24-bb5b-fdd06e5c2b78-kube-api-access-phvth\") pod \"nmstate-operator-858ddd8f98-mdqp5\" (UID: \"b4436bab-3a23-4c24-bb5b-fdd06e5c2b78\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.113083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvth\" (UniqueName: \"kubernetes.io/projected/b4436bab-3a23-4c24-bb5b-fdd06e5c2b78-kube-api-access-phvth\") pod \"nmstate-operator-858ddd8f98-mdqp5\" (UID: \"b4436bab-3a23-4c24-bb5b-fdd06e5c2b78\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.134934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvth\" (UniqueName: \"kubernetes.io/projected/b4436bab-3a23-4c24-bb5b-fdd06e5c2b78-kube-api-access-phvth\") pod \"nmstate-operator-858ddd8f98-mdqp5\" (UID: \"b4436bab-3a23-4c24-bb5b-fdd06e5c2b78\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.180774 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.214834 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.256280 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.256896 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260088 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260196 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260235 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260250 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260464 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260518 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.260600 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.263155 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd8fb74b5-zdn7t"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.267893 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.316393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39349673-b7ab-4638-8bf8-151e17fabcaa-serving-cert\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.317237 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-config\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.317404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snd2n\" (UniqueName: \"kubernetes.io/projected/39349673-b7ab-4638-8bf8-151e17fabcaa-kube-api-access-snd2n\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.317475 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-client-ca\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: 
\"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.418105 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-config\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.418183 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snd2n\" (UniqueName: \"kubernetes.io/projected/39349673-b7ab-4638-8bf8-151e17fabcaa-kube-api-access-snd2n\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.418213 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-client-ca\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.418113 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5"] Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.418290 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39349673-b7ab-4638-8bf8-151e17fabcaa-serving-cert\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.419178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-client-ca\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.420885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39349673-b7ab-4638-8bf8-151e17fabcaa-config\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.424314 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39349673-b7ab-4638-8bf8-151e17fabcaa-serving-cert\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.439055 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snd2n\" (UniqueName: 
\"kubernetes.io/projected/39349673-b7ab-4638-8bf8-151e17fabcaa-kube-api-access-snd2n\") pod \"route-controller-manager-699c5d854d-pmhtv\" (UID: \"39349673-b7ab-4638-8bf8-151e17fabcaa\") " pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.581850 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:44 crc kubenswrapper[4725]: I1002 11:40:44.776619 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv"] Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.222066 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" event={"ID":"39349673-b7ab-4638-8bf8-151e17fabcaa","Type":"ContainerStarted","Data":"eab9cc19ea8cc22779d274eaa1af8150ab833443410429284ed3180ba52fedcb"} Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.222219 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" event={"ID":"39349673-b7ab-4638-8bf8-151e17fabcaa","Type":"ContainerStarted","Data":"ab11d47a1d961dd7b91f41efa9445bd4f7cc4284e409df4dcc8c484d01a3e0fb"} Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.222306 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.223339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" event={"ID":"b4436bab-3a23-4c24-bb5b-fdd06e5c2b78","Type":"ContainerStarted","Data":"7efd3382d611d2cc52c8fdbc71366a1768fe49f9fbcea7787f268f21d279c61c"} Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.243641 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" podStartSLOduration=3.24362201 podStartE2EDuration="3.24362201s" podCreationTimestamp="2025-10-02 11:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:40:45.240371216 +0000 UTC m=+765.147870689" watchObservedRunningTime="2025-10-02 11:40:45.24362201 +0000 UTC m=+765.151121483" Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.275960 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c46d62-0d85-42b3-8385-f6e7af8afcc5" path="/var/lib/kubelet/pods/e5c46d62-0d85-42b3-8385-f6e7af8afcc5/volumes" Oct 02 11:40:45 crc kubenswrapper[4725]: I1002 11:40:45.431121 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-699c5d854d-pmhtv" Oct 02 11:40:48 crc kubenswrapper[4725]: I1002 11:40:48.239106 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" event={"ID":"b4436bab-3a23-4c24-bb5b-fdd06e5c2b78","Type":"ContainerStarted","Data":"ef274f4d5d1ea5b6f9ea7b1276478be95bfcf16fa03e91e8f72641d11c949533"} Oct 02 11:40:48 crc kubenswrapper[4725]: I1002 11:40:48.255730 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" 
Oct 02 11:40:48 crc kubenswrapper[4725]: I1002 11:40:48.255730 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-mdqp5" podStartSLOduration=2.279431044 podStartE2EDuration="5.25570659s" podCreationTimestamp="2025-10-02 11:40:43 +0000 UTC" firstStartedPulling="2025-10-02 11:40:44.431403298 +0000 UTC m=+764.338902761" lastFinishedPulling="2025-10-02 11:40:47.407678844 +0000 UTC m=+767.315178307" observedRunningTime="2025-10-02 11:40:48.253247065 +0000 UTC m=+768.160746568" watchObservedRunningTime="2025-10-02 11:40:48.25570659 +0000 UTC m=+768.163206043"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.726857 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.728186 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.730469 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bdmxz"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.737032 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.761059 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mg76p"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.761955 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.763873 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.764416 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.769247 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.783531 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.824166 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8wz\" (UniqueName: \"kubernetes.io/projected/36df4296-bcd2-4c2a-b6f7-eef03e21d934-kube-api-access-gm8wz\") pod \"nmstate-metrics-fdff9cb8d-xgxbz\" (UID: \"36df4296-bcd2-4c2a-b6f7-eef03e21d934\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.857917 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.899249 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.900115 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.902218 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4qm6t"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.902434 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.902644 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.911227 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"]
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925056 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-ovs-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925122 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8wz\" (UniqueName: \"kubernetes.io/projected/36df4296-bcd2-4c2a-b6f7-eef03e21d934-kube-api-access-gm8wz\") pod \"nmstate-metrics-fdff9cb8d-xgxbz\" (UID: \"36df4296-bcd2-4c2a-b6f7-eef03e21d934\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925180 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-dbus-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925205 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7z5q\" (UniqueName: \"kubernetes.io/projected/7ff10093-4a58-4838-88a7-cb77f8ae577a-kube-api-access-p7z5q\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925238 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff10093-4a58-4838-88a7-cb77f8ae577a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-nmstate-lock\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.925295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sch8j\" (UniqueName: \"kubernetes.io/projected/7d2d0930-f603-4f33-9da1-f7d372d70912-kube-api-access-sch8j\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:52 crc kubenswrapper[4725]: I1002 11:40:52.973518 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8wz\" (UniqueName: \"kubernetes.io/projected/36df4296-bcd2-4c2a-b6f7-eef03e21d934-kube-api-access-gm8wz\") pod \"nmstate-metrics-fdff9cb8d-xgxbz\" (UID: \"36df4296-bcd2-4c2a-b6f7-eef03e21d934\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-dbus-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026370 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsz8s\" (UniqueName: \"kubernetes.io/projected/76aebb91-1906-47b5-8efc-f4e8290b9ffb-kube-api-access-fsz8s\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7z5q\" (UniqueName: \"kubernetes.io/projected/7ff10093-4a58-4838-88a7-cb77f8ae577a-kube-api-access-p7z5q\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76aebb91-1906-47b5-8efc-f4e8290b9ffb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff10093-4a58-4838-88a7-cb77f8ae577a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-nmstate-lock\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026486 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aebb91-1906-47b5-8efc-f4e8290b9ffb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026516 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sch8j\" (UniqueName: \"kubernetes.io/projected/7d2d0930-f603-4f33-9da1-f7d372d70912-kube-api-access-sch8j\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026539 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-ovs-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-ovs-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.026958 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-dbus-socket\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.027699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7d2d0930-f603-4f33-9da1-f7d372d70912-nmstate-lock\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.030626 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff10093-4a58-4838-88a7-cb77f8ae577a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.047676 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sch8j\" (UniqueName: \"kubernetes.io/projected/7d2d0930-f603-4f33-9da1-f7d372d70912-kube-api-access-sch8j\") pod \"nmstate-handler-mg76p\" (UID: \"7d2d0930-f603-4f33-9da1-f7d372d70912\") " pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.048295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7z5q\" (UniqueName: \"kubernetes.io/projected/7ff10093-4a58-4838-88a7-cb77f8ae577a-kube-api-access-p7z5q\") pod \"nmstate-webhook-6cdbc54649-c8s4z\" (UID: \"7ff10093-4a58-4838-88a7-cb77f8ae577a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.060329 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.078357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mg76p"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.089156 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.127220 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsz8s\" (UniqueName: \"kubernetes.io/projected/76aebb91-1906-47b5-8efc-f4e8290b9ffb-kube-api-access-fsz8s\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.127354 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76aebb91-1906-47b5-8efc-f4e8290b9ffb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.127403 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aebb91-1906-47b5-8efc-f4e8290b9ffb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.132700 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76aebb91-1906-47b5-8efc-f4e8290b9ffb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.138901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76aebb91-1906-47b5-8efc-f4e8290b9ffb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.149021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsz8s\" (UniqueName: \"kubernetes.io/projected/76aebb91-1906-47b5-8efc-f4e8290b9ffb-kube-api-access-fsz8s\") pod \"nmstate-console-plugin-6b874cbd85-bm7x4\" (UID: \"76aebb91-1906-47b5-8efc-f4e8290b9ffb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.239701 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.272499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mg76p" event={"ID":"7d2d0930-f603-4f33-9da1-f7d372d70912","Type":"ContainerStarted","Data":"cb2f7a48cf711508adf92b36e8d78ec7c7ff33d07b8dd170e51574a604c6030c"}
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.484608 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz"]
Oct 02 11:40:53 crc kubenswrapper[4725]: W1002 11:40:53.499394 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36df4296_bcd2_4c2a_b6f7_eef03e21d934.slice/crio-041c21fb8e937c54feab32f727b2546da30358a47bce424c470deb60078496d9 WatchSource:0}: Error finding container 041c21fb8e937c54feab32f727b2546da30358a47bce424c470deb60078496d9: Status 404 returned error can't find the container with id 041c21fb8e937c54feab32f727b2546da30358a47bce424c470deb60078496d9
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.514390 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75f4c4c495-jbg5v"]
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.515284 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.527978 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75f4c4c495-jbg5v"]
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.574689 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z"]
Oct 02 11:40:53 crc kubenswrapper[4725]: W1002 11:40:53.577601 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff10093_4a58_4838_88a7_cb77f8ae577a.slice/crio-cccc61ec41fce2ad0392a8585e43f04f2404412048721dd517353c75095c1648 WatchSource:0}: Error finding container cccc61ec41fce2ad0392a8585e43f04f2404412048721dd517353c75095c1648: Status 404 returned error can't find the container with id cccc61ec41fce2ad0392a8585e43f04f2404412048721dd517353c75095c1648
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634636 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-service-ca\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634704 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-oauth-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634759 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf2k8\" (UniqueName: \"kubernetes.io/projected/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-kube-api-access-tf2k8\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634796 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-oauth-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-trusted-ca-bundle\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634859 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.634910 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.664833 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4"]
Oct 02 11:40:53 crc kubenswrapper[4725]: W1002 11:40:53.670739 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76aebb91_1906_47b5_8efc_f4e8290b9ffb.slice/crio-cc187f6e20c8ff107129256f6b83c7a820e9020c5322d13b450a98f8cd343da7 WatchSource:0}: Error finding container cc187f6e20c8ff107129256f6b83c7a820e9020c5322d13b450a98f8cd343da7: Status 404 returned error can't find the container with id cc187f6e20c8ff107129256f6b83c7a820e9020c5322d13b450a98f8cd343da7
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735532 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf2k8\" (UniqueName: \"kubernetes.io/projected/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-kube-api-access-tf2k8\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-oauth-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v"
Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-trusted-ca-bundle\") pod \"console-75f4c4c495-jbg5v\" (UID:
\"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-service-ca\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.735798 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-oauth-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.736626 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-oauth-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.737686 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.738020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-trusted-ca-bundle\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.738225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-service-ca\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.740565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-oauth-config\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " 
pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.740631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-console-serving-cert\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.751154 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf2k8\" (UniqueName: \"kubernetes.io/projected/7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22-kube-api-access-tf2k8\") pod \"console-75f4c4c495-jbg5v\" (UID: \"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22\") " pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:53 crc kubenswrapper[4725]: I1002 11:40:53.841979 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:40:54 crc kubenswrapper[4725]: I1002 11:40:54.241610 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75f4c4c495-jbg5v"] Oct 02 11:40:54 crc kubenswrapper[4725]: I1002 11:40:54.276254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4" event={"ID":"76aebb91-1906-47b5-8efc-f4e8290b9ffb","Type":"ContainerStarted","Data":"cc187f6e20c8ff107129256f6b83c7a820e9020c5322d13b450a98f8cd343da7"} Oct 02 11:40:54 crc kubenswrapper[4725]: I1002 11:40:54.277206 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f4c4c495-jbg5v" event={"ID":"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22","Type":"ContainerStarted","Data":"ec37ac61796db0aa2d46698a5dc1e3409733458b2e1d2b8380e7119999b82a8d"} Oct 02 11:40:54 crc kubenswrapper[4725]: I1002 11:40:54.284461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz" event={"ID":"36df4296-bcd2-4c2a-b6f7-eef03e21d934","Type":"ContainerStarted","Data":"041c21fb8e937c54feab32f727b2546da30358a47bce424c470deb60078496d9"} Oct 02 11:40:54 crc kubenswrapper[4725]: I1002 11:40:54.286225 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z" event={"ID":"7ff10093-4a58-4838-88a7-cb77f8ae577a","Type":"ContainerStarted","Data":"cccc61ec41fce2ad0392a8585e43f04f2404412048721dd517353c75095c1648"} Oct 02 11:40:55 crc kubenswrapper[4725]: I1002 11:40:55.293118 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f4c4c495-jbg5v" event={"ID":"7380b8b7-eb7c-43ba-97c0-69b4f0ac4b22","Type":"ContainerStarted","Data":"0b90b014b88eaabb47e60e1a4facfe9447801dcf5e07d9e46f2c67fe6a0ee447"} Oct 02 11:40:55 crc kubenswrapper[4725]: I1002 11:40:55.309014 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75f4c4c495-jbg5v" podStartSLOduration=2.308999 podStartE2EDuration="2.308999s" podCreationTimestamp="2025-10-02 11:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:40:55.307964323 +0000 UTC m=+775.215463786" watchObservedRunningTime="2025-10-02 11:40:55.308999 +0000 UTC m=+775.216498453" Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.305745 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz" 
event={"ID":"36df4296-bcd2-4c2a-b6f7-eef03e21d934","Type":"ContainerStarted","Data":"706a0e001ed17b75d175e08b8d0bc28bc82f1182e5646182b0c51578cbe6d4a5"} Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.308947 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z" event={"ID":"7ff10093-4a58-4838-88a7-cb77f8ae577a","Type":"ContainerStarted","Data":"3314c7874fdd25de7eb9d927eed87da56670ee6ea72c4ccdc04b6d872ebe62f2"} Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.309116 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z" Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.310601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4" event={"ID":"76aebb91-1906-47b5-8efc-f4e8290b9ffb","Type":"ContainerStarted","Data":"5e456a5f73c2c1653028070321cec38d2cd4512a08b5052eb8c2deb6b3d7c957"} Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.333320 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z" podStartSLOduration=1.987481831 podStartE2EDuration="5.333305489s" podCreationTimestamp="2025-10-02 11:40:52 +0000 UTC" firstStartedPulling="2025-10-02 11:40:53.579586861 +0000 UTC m=+773.487086324" lastFinishedPulling="2025-10-02 11:40:56.925410479 +0000 UTC m=+776.832909982" observedRunningTime="2025-10-02 11:40:57.331623586 +0000 UTC m=+777.239123049" watchObservedRunningTime="2025-10-02 11:40:57.333305489 +0000 UTC m=+777.240804952" Oct 02 11:40:57 crc kubenswrapper[4725]: I1002 11:40:57.349027 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bm7x4" podStartSLOduration=2.096429439 podStartE2EDuration="5.349005318s" podCreationTimestamp="2025-10-02 11:40:52 +0000 UTC" firstStartedPulling="2025-10-02 11:40:53.673294072 +0000 UTC m=+773.580793535" lastFinishedPulling="2025-10-02 11:40:56.925869921 +0000 UTC m=+776.833369414" observedRunningTime="2025-10-02 11:40:57.348029334 +0000 UTC m=+777.255528797" watchObservedRunningTime="2025-10-02 11:40:57.349005318 +0000 UTC m=+777.256504781" Oct 02 11:40:58 crc kubenswrapper[4725]: I1002 11:40:58.318879 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mg76p" event={"ID":"7d2d0930-f603-4f33-9da1-f7d372d70912","Type":"ContainerStarted","Data":"50eaeb2e104fc4b032d73e464376d5ec3b7357d1b50a832899820918ca326cf5"} Oct 02 11:40:58 crc kubenswrapper[4725]: I1002 11:40:58.337867 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mg76p" podStartSLOduration=2.507506234 podStartE2EDuration="6.337844445s" podCreationTimestamp="2025-10-02 11:40:52 +0000 UTC" firstStartedPulling="2025-10-02 11:40:53.131107146 +0000 UTC m=+773.038606609" lastFinishedPulling="2025-10-02 11:40:56.961445347 +0000 UTC m=+776.868944820" observedRunningTime="2025-10-02 11:40:58.335130657 +0000 UTC m=+778.242630140" watchObservedRunningTime="2025-10-02 11:40:58.337844445 +0000 UTC m=+778.245343928" Oct 02 11:40:59 crc kubenswrapper[4725]: I1002 11:40:59.324789 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mg76p" Oct 02 11:41:00 crc kubenswrapper[4725]: I1002 11:41:00.330954 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz" event={"ID":"36df4296-bcd2-4c2a-b6f7-eef03e21d934","Type":"ContainerStarted","Data":"7862b2323fb3052e60dd26c0e92ffcc37e408d9c68a8bd2a89b721573654becf"} Oct 02 11:41:00 crc kubenswrapper[4725]: I1002 11:41:00.348765 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-xgxbz" podStartSLOduration=2.056450562 podStartE2EDuration="8.348750847s" podCreationTimestamp="2025-10-02 11:40:52 +0000 UTC" firstStartedPulling="2025-10-02 11:40:53.50202021 +0000 UTC m=+773.409519673" lastFinishedPulling="2025-10-02 11:40:59.794320495 +0000 UTC m=+779.701819958" observedRunningTime="2025-10-02 11:41:00.348470689 +0000 UTC m=+780.255970162" watchObservedRunningTime="2025-10-02 11:41:00.348750847 +0000 UTC m=+780.256250310" Oct 02 11:41:03 crc kubenswrapper[4725]: I1002 11:41:03.105646 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mg76p" Oct 02 11:41:03 crc kubenswrapper[4725]: I1002 11:41:03.842323 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:41:03 crc kubenswrapper[4725]: I1002 11:41:03.842435 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:41:03 crc kubenswrapper[4725]: I1002 11:41:03.848373 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:41:04 crc kubenswrapper[4725]: I1002 11:41:04.364821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75f4c4c495-jbg5v" Oct 02 11:41:04 crc kubenswrapper[4725]: I1002 11:41:04.426594 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:41:13 crc kubenswrapper[4725]: I1002 11:41:13.095945 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-c8s4z" Oct 02 11:41:14 crc kubenswrapper[4725]: I1002 11:41:14.978349 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:41:14 crc kubenswrapper[4725]: I1002 11:41:14.978738 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:41:26 crc kubenswrapper[4725]: I1002 11:41:26.928661 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598"] Oct 02 11:41:26 crc kubenswrapper[4725]: I1002 11:41:26.930394 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:26 crc kubenswrapper[4725]: I1002 11:41:26.932811 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 11:41:26 crc kubenswrapper[4725]: I1002 11:41:26.947180 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598"] Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.093190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.093255 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gvq\" (UniqueName: \"kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.093308 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.194948 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.195005 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gvq\" (UniqueName: \"kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.195048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.195583 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.195583 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.217050 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gvq\" (UniqueName: \"kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.265598 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:27 crc kubenswrapper[4725]: I1002 11:41:27.758565 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598"] Oct 02 11:41:27 crc kubenswrapper[4725]: W1002 11:41:27.762284 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ca3b9f7_62cf_4bd9_807d_e9ab02d98327.slice/crio-dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4 WatchSource:0}: Error finding container dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4: Status 404 returned error can't find the container with id dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4 Oct 02 11:41:28 crc kubenswrapper[4725]: I1002 11:41:28.508696 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerID="d6b9436ac4be150b5eb60a706ab36385696390955fb3e8d74929f59f5ef13bc7" exitCode=0 Oct 02 11:41:28 crc kubenswrapper[4725]: I1002 11:41:28.509002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" event={"ID":"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327","Type":"ContainerDied","Data":"d6b9436ac4be150b5eb60a706ab36385696390955fb3e8d74929f59f5ef13bc7"} Oct 02 11:41:28 crc kubenswrapper[4725]: I1002 11:41:28.509241 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" event={"ID":"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327","Type":"ContainerStarted","Data":"dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4"} Oct 02 11:41:29 crc kubenswrapper[4725]: I1002 11:41:29.466343 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kq4vt" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" containerID="cri-o://90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf" gracePeriod=15 Oct 02 11:41:29 crc 
kubenswrapper[4725]: I1002 11:41:29.921617 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kq4vt_a7e4bee1-a3eb-4e37-bfcb-99350ce66859/console/0.log" Oct 02 11:41:29 crc kubenswrapper[4725]: I1002 11:41:29.921904 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036320 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036389 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036408 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036441 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzw54\" (UniqueName: \"kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036495 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.036538 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca\") pod \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\" (UID: \"a7e4bee1-a3eb-4e37-bfcb-99350ce66859\") " Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.037104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.037133 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.037154 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.037167 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config" (OuterVolumeSpecName: "console-config") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.038222 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.038273 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.038292 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.038307 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.041814 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.042435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.044473 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54" (OuterVolumeSpecName: "kube-api-access-xzw54") pod "a7e4bee1-a3eb-4e37-bfcb-99350ce66859" (UID: "a7e4bee1-a3eb-4e37-bfcb-99350ce66859"). InnerVolumeSpecName "kube-api-access-xzw54". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.139276 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.139315 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.139324 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzw54\" (UniqueName: \"kubernetes.io/projected/a7e4bee1-a3eb-4e37-bfcb-99350ce66859-kube-api-access-xzw54\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.466451 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"] Oct 02 11:41:30 crc kubenswrapper[4725]: E1002 11:41:30.466850 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.466878 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.467066 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerName="console" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.468356 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.477105 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"] Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.527647 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerID="cea9c10c6df20c45297195633eea1197acc855dd25aa9644b2e6b953f359db4e" exitCode=0 Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.527766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" event={"ID":"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327","Type":"ContainerDied","Data":"cea9c10c6df20c45297195633eea1197acc855dd25aa9644b2e6b953f359db4e"} Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530775 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kq4vt_a7e4bee1-a3eb-4e37-bfcb-99350ce66859/console/0.log" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530832 4725 generic.go:334] "Generic (PLEG): container finished" podID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" containerID="90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf" exitCode=2 Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530868 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kq4vt" event={"ID":"a7e4bee1-a3eb-4e37-bfcb-99350ce66859","Type":"ContainerDied","Data":"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf"} Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530898 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kq4vt" event={"ID":"a7e4bee1-a3eb-4e37-bfcb-99350ce66859","Type":"ContainerDied","Data":"db7345580fdea77b25de58168184bfd26410b194ce8eee5367d1649aac91850c"} Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530920 4725 scope.go:117] "RemoveContainer" containerID="90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.530925 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kq4vt" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.558204 4725 scope.go:117] "RemoveContainer" containerID="90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf" Oct 02 11:41:30 crc kubenswrapper[4725]: E1002 11:41:30.558785 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf\": container with ID starting with 90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf not found: ID does not exist" containerID="90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.558830 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf"} err="failed to get container status \"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf\": rpc error: code = NotFound desc = could not find container \"90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf\": container with ID starting with 90d7d582b0aa92f47c669460928f6b674e8d76c0d0f3fb15e73228df9c0a42bf not found: ID does not exist" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.577007 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.577963 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kq4vt"] Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.644951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtk4d\" (UniqueName: \"kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.645018 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.645055 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.745756 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtk4d\" (UniqueName: \"kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.745809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities\") pod \"redhat-operators-6xwhq\" 
(UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.745841 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.746341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.746624 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.769168 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtk4d\" (UniqueName: \"kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d\") pod \"redhat-operators-6xwhq\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") " pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.795138 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:30 crc kubenswrapper[4725]: I1002 11:41:30.989119 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"] Oct 02 11:41:31 crc kubenswrapper[4725]: W1002 11:41:31.000712 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod473719b9_8261_44f1_a659_5ffe90d1422f.slice/crio-782f50c470bf22ab995d4038ba7237d822922478c202561acd1c4fb73e2bd036 WatchSource:0}: Error finding container 782f50c470bf22ab995d4038ba7237d822922478c202561acd1c4fb73e2bd036: Status 404 returned error can't find the container with id 782f50c470bf22ab995d4038ba7237d822922478c202561acd1c4fb73e2bd036 Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.276741 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e4bee1-a3eb-4e37-bfcb-99350ce66859" path="/var/lib/kubelet/pods/a7e4bee1-a3eb-4e37-bfcb-99350ce66859/volumes" Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.546780 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerID="b3b4e0482c19d0d7b46d5d80945f4f2e03d5ebdede9f89de8f9c5eccffc6fb3b" exitCode=0 Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.546860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" event={"ID":"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327","Type":"ContainerDied","Data":"b3b4e0482c19d0d7b46d5d80945f4f2e03d5ebdede9f89de8f9c5eccffc6fb3b"} Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.548972 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="473719b9-8261-44f1-a659-5ffe90d1422f" containerID="a4d45c6eec46d6a1d5f37d7b2abd4f613b3ec52d480a2b0e824de483a1f27422" exitCode=0 Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.549063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerDied","Data":"a4d45c6eec46d6a1d5f37d7b2abd4f613b3ec52d480a2b0e824de483a1f27422"} Oct 02 11:41:31 crc kubenswrapper[4725]: I1002 11:41:31.549129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerStarted","Data":"782f50c470bf22ab995d4038ba7237d822922478c202561acd1c4fb73e2bd036"} Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.560095 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerStarted","Data":"4f0d8e93fb4616290509e69e89eddffc8ddb3ba087bac7bb545c992ec45df0e5"} Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.885511 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.970463 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4gvq\" (UniqueName: \"kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq\") pod \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.970640 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util\") pod \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.970675 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle\") pod \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\" (UID: \"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327\") " Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.972028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle" (OuterVolumeSpecName: "bundle") pod "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" (UID: "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.986150 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq" (OuterVolumeSpecName: "kube-api-access-x4gvq") pod "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" (UID: "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327"). InnerVolumeSpecName "kube-api-access-x4gvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:41:32 crc kubenswrapper[4725]: I1002 11:41:32.986270 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util" (OuterVolumeSpecName: "util") pod "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" (UID: "3ca3b9f7-62cf-4bd9-807d-e9ab02d98327"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.071853 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-util\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.071901 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.071913 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4gvq\" (UniqueName: \"kubernetes.io/projected/3ca3b9f7-62cf-4bd9-807d-e9ab02d98327-kube-api-access-x4gvq\") on node \"crc\" DevicePath \"\"" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.566862 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" event={"ID":"3ca3b9f7-62cf-4bd9-807d-e9ab02d98327","Type":"ContainerDied","Data":"dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4"} Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.567199 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd3fa1298181c9fcdb539a844c32f9d6e14cd507918b09ddef041c61037ebed4" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.566874 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598" Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.569480 4725 generic.go:334] "Generic (PLEG): container finished" podID="473719b9-8261-44f1-a659-5ffe90d1422f" containerID="4f0d8e93fb4616290509e69e89eddffc8ddb3ba087bac7bb545c992ec45df0e5" exitCode=0 Oct 02 11:41:33 crc kubenswrapper[4725]: I1002 11:41:33.569547 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerDied","Data":"4f0d8e93fb4616290509e69e89eddffc8ddb3ba087bac7bb545c992ec45df0e5"} Oct 02 11:41:34 crc kubenswrapper[4725]: I1002 11:41:34.576490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerStarted","Data":"8c1007984c259792c667381889346447a39c57b826a5ec91be5de5292c9e986d"} Oct 02 11:41:40 crc kubenswrapper[4725]: I1002 11:41:40.795876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:40 crc kubenswrapper[4725]: I1002 11:41:40.796284 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:40 crc kubenswrapper[4725]: I1002 11:41:40.846678 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:40 crc kubenswrapper[4725]: I1002 11:41:40.868387 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xwhq" podStartSLOduration=8.432874605 podStartE2EDuration="10.868368702s" podCreationTimestamp="2025-10-02 11:41:30 +0000 UTC" firstStartedPulling="2025-10-02 11:41:31.551903728 +0000 UTC m=+811.459403191" lastFinishedPulling="2025-10-02 11:41:33.987397785 +0000 UTC m=+813.894897288" observedRunningTime="2025-10-02 11:41:34.597559337 +0000 UTC m=+814.505058800" watchObservedRunningTime="2025-10-02 11:41:40.868368702 +0000 UTC m=+820.775868165" Oct 02 11:41:41 crc kubenswrapper[4725]: I1002 11:41:41.652376 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xwhq" Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.251473 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"] Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.251954 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xwhq" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="registry-server" containerID="cri-o://8c1007984c259792c667381889346447a39c57b826a5ec91be5de5292c9e986d" gracePeriod=2 Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.456818 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"] Oct 02 11:41:44 crc kubenswrapper[4725]: E1002 11:41:44.457020 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="pull" Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.457032 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="pull" Oct 02 11:41:44 crc kubenswrapper[4725]: E1002 11:41:44.457041 
Oct 02 11:41:44 crc kubenswrapper[4725]: E1002 11:41:44.457041 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="util"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.457047 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="util"
Oct 02 11:41:44 crc kubenswrapper[4725]: E1002 11:41:44.457059 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="extract"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.457065 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="extract"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.457158 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca3b9f7-62cf-4bd9-807d-e9ab02d98327" containerName="extract"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.457504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.459459 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.459468 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x5mkc"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.460279 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.460995 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.463955 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.474925 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"]
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.614595 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-apiservice-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.614640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-webhook-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.614674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnj4n\" (UniqueName: \"kubernetes.io/projected/ceb5035d-4044-42f3-be35-b3f861ba059c-kube-api-access-dnj4n\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.632164 4725 generic.go:334] "Generic (PLEG): container finished" podID="473719b9-8261-44f1-a659-5ffe90d1422f" containerID="8c1007984c259792c667381889346447a39c57b826a5ec91be5de5292c9e986d" exitCode=0
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.632201 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerDied","Data":"8c1007984c259792c667381889346447a39c57b826a5ec91be5de5292c9e986d"}
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.715777 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-apiservice-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.715818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-webhook-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.715872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnj4n\" (UniqueName: \"kubernetes.io/projected/ceb5035d-4044-42f3-be35-b3f861ba059c-kube-api-access-dnj4n\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.724553 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-webhook-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.730511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ceb5035d-4044-42f3-be35-b3f861ba059c-apiservice-cert\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.733260 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"]
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.733913 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.735389 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.735535 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.743087 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-77qrr"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.745458 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"]
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.762922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnj4n\" (UniqueName: \"kubernetes.io/projected/ceb5035d-4044-42f3-be35-b3f861ba059c-kube-api-access-dnj4n\") pod \"metallb-operator-controller-manager-8cc8c8574-jxrnl\" (UID: \"ceb5035d-4044-42f3-be35-b3f861ba059c\") " pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.776049 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.818556 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-apiservice-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.818950 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-webhook-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.818972 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9vm6\" (UniqueName: \"kubernetes.io/projected/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-kube-api-access-m9vm6\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.920564 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-webhook-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.920612 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9vm6\" (UniqueName: \"kubernetes.io/projected/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-kube-api-access-m9vm6\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.920639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-apiservice-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.925528 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-apiservice-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.928571 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-webhook-cert\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.952462 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9vm6\" (UniqueName: \"kubernetes.io/projected/6cada9d9-3b84-4015-b1c9-7bf3de0debbb-kube-api-access-m9vm6\") pod \"metallb-operator-webhook-server-757dfd7686-7m9m2\" (UID: \"6cada9d9-3b84-4015-b1c9-7bf3de0debbb\") " pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.978573 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 11:41:44 crc kubenswrapper[4725]: I1002 11:41:44.978632 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.010174 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"]
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.129804 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.153964 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xwhq"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.223677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities\") pod \"473719b9-8261-44f1-a659-5ffe90d1422f\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") "
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.223743 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtk4d\" (UniqueName: \"kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d\") pod \"473719b9-8261-44f1-a659-5ffe90d1422f\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") "
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.223809 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content\") pod \"473719b9-8261-44f1-a659-5ffe90d1422f\" (UID: \"473719b9-8261-44f1-a659-5ffe90d1422f\") "
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.224740 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities" (OuterVolumeSpecName: "utilities") pod "473719b9-8261-44f1-a659-5ffe90d1422f" (UID: "473719b9-8261-44f1-a659-5ffe90d1422f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.227713 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d" (OuterVolumeSpecName: "kube-api-access-jtk4d") pod "473719b9-8261-44f1-a659-5ffe90d1422f" (UID: "473719b9-8261-44f1-a659-5ffe90d1422f"). InnerVolumeSpecName "kube-api-access-jtk4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.317913 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "473719b9-8261-44f1-a659-5ffe90d1422f" (UID: "473719b9-8261-44f1-a659-5ffe90d1422f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.318167 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"]
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.325213 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.325247 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtk4d\" (UniqueName: \"kubernetes.io/projected/473719b9-8261-44f1-a659-5ffe90d1422f-kube-api-access-jtk4d\") on node \"crc\" DevicePath \"\""
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.325263 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/473719b9-8261-44f1-a659-5ffe90d1422f-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 11:41:45 crc kubenswrapper[4725]: W1002 11:41:45.325551 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cada9d9_3b84_4015_b1c9_7bf3de0debbb.slice/crio-e2e57158d8d18458d69718cc6f1196ffcd167afc8778d7138a143e4fedd86bcf WatchSource:0}: Error finding container e2e57158d8d18458d69718cc6f1196ffcd167afc8778d7138a143e4fedd86bcf: Status 404 returned error can't find the container with id e2e57158d8d18458d69718cc6f1196ffcd167afc8778d7138a143e4fedd86bcf
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.637936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2" event={"ID":"6cada9d9-3b84-4015-b1c9-7bf3de0debbb","Type":"ContainerStarted","Data":"e2e57158d8d18458d69718cc6f1196ffcd167afc8778d7138a143e4fedd86bcf"}
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.639750 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xwhq" event={"ID":"473719b9-8261-44f1-a659-5ffe90d1422f","Type":"ContainerDied","Data":"782f50c470bf22ab995d4038ba7237d822922478c202561acd1c4fb73e2bd036"}
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.639780 4725 scope.go:117] "RemoveContainer" containerID="8c1007984c259792c667381889346447a39c57b826a5ec91be5de5292c9e986d"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.639782 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xwhq"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.641604 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl" event={"ID":"ceb5035d-4044-42f3-be35-b3f861ba059c","Type":"ContainerStarted","Data":"373ccf580ffeb6dde99a7c33a9a0fea4208986161585851461e2ce7bef49047a"}
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.655405 4725 scope.go:117] "RemoveContainer" containerID="4f0d8e93fb4616290509e69e89eddffc8ddb3ba087bac7bb545c992ec45df0e5"
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.663563 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"]
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.666403 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xwhq"]
Oct 02 11:41:45 crc kubenswrapper[4725]: I1002 11:41:45.680802 4725 scope.go:117] "RemoveContainer" containerID="a4d45c6eec46d6a1d5f37d7b2abd4f613b3ec52d480a2b0e824de483a1f27422"
Oct 02 11:41:47 crc kubenswrapper[4725]: I1002 11:41:47.275386 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" path="/var/lib/kubelet/pods/473719b9-8261-44f1-a659-5ffe90d1422f/volumes"
Oct 02 11:41:48 crc kubenswrapper[4725]: I1002 11:41:48.659495 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl" event={"ID":"ceb5035d-4044-42f3-be35-b3f861ba059c","Type":"ContainerStarted","Data":"f378321ff1d3b6a85e05d1a01bc2b634c7799d53e6227116404075c3d1461b83"}
Oct 02 11:41:48 crc kubenswrapper[4725]: I1002 11:41:48.660092 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl"
Oct 02 11:41:48 crc kubenswrapper[4725]: I1002 11:41:48.692551 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl" podStartSLOduration=1.471618123 podStartE2EDuration="4.692526334s" podCreationTimestamp="2025-10-02 11:41:44 +0000 UTC" firstStartedPulling="2025-10-02 11:41:45.024383852 +0000 UTC m=+824.931883315" lastFinishedPulling="2025-10-02 11:41:48.245292053 +0000 UTC m=+828.152791526" observedRunningTime="2025-10-02 11:41:48.687337912 +0000 UTC m=+828.594837385" watchObservedRunningTime="2025-10-02 11:41:48.692526334 +0000 UTC m=+828.600025797"
Oct 02 11:41:50 crc kubenswrapper[4725]: I1002 11:41:50.674009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2" event={"ID":"6cada9d9-3b84-4015-b1c9-7bf3de0debbb","Type":"ContainerStarted","Data":"11eff3647b3e3d5274096bfcc1295ce3bc7330187dc05fcef9b76549633f91ae"}
Oct 02 11:41:50 crc kubenswrapper[4725]: I1002 11:41:50.674181 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:41:50 crc kubenswrapper[4725]: I1002 11:41:50.713121 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2" podStartSLOduration=2.197094753 podStartE2EDuration="6.713094961s" podCreationTimestamp="2025-10-02 11:41:44 +0000 UTC" firstStartedPulling="2025-10-02 11:41:45.328878178 +0000 UTC m=+825.236377651" lastFinishedPulling="2025-10-02 11:41:49.844878396 +0000 UTC m=+829.752377859" observedRunningTime="2025-10-02 11:41:50.704236825 +0000 UTC m=+830.611736288" watchObservedRunningTime="2025-10-02 11:41:50.713094961 +0000 UTC m=+830.620594454"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.060497 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:41:57 crc kubenswrapper[4725]: E1002 11:41:57.061274 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="registry-server"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.061289 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="registry-server"
Oct 02 11:41:57 crc kubenswrapper[4725]: E1002 11:41:57.061307 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="extract-content"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.061314 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="extract-content"
Oct 02 11:41:57 crc kubenswrapper[4725]: E1002 11:41:57.061325 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="extract-utilities"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.061334 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="extract-utilities"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.061451 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="473719b9-8261-44f1-a659-5ffe90d1422f" containerName="registry-server"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.062354 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.077282 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.187806 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.188037 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt6r9\" (UniqueName: \"kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.188105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.289616 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt6r9\" (UniqueName: \"kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.289670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.289758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.290284 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.290443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.308323 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt6r9\" (UniqueName: \"kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9\") pod \"certified-operators-l47bc\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") " pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.379022 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:41:57 crc kubenswrapper[4725]: I1002 11:41:57.843126 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:41:58 crc kubenswrapper[4725]: I1002 11:41:58.725022 4725 generic.go:334] "Generic (PLEG): container finished" podID="4032af10-56af-489b-9eee-95ae518843b4" containerID="2a89fe5403b085afa8124327d2a4e213953df62bf0a9b6730d47e518b0d70b98" exitCode=0
Oct 02 11:41:58 crc kubenswrapper[4725]: I1002 11:41:58.725066 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerDied","Data":"2a89fe5403b085afa8124327d2a4e213953df62bf0a9b6730d47e518b0d70b98"}
Oct 02 11:41:58 crc kubenswrapper[4725]: I1002 11:41:58.725090 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerStarted","Data":"e35acaafcaab60172ff24d78e370ab119b95e3063aa52bd9ddd41200c0a99da0"}
Oct 02 11:41:59 crc kubenswrapper[4725]: I1002 11:41:59.732081 4725 generic.go:334] "Generic (PLEG): container finished" podID="4032af10-56af-489b-9eee-95ae518843b4" containerID="20a4480e69d9b209c2655da43bc879607ca22d5c917f8e56eacc9be2ceeca663" exitCode=0
Oct 02 11:41:59 crc kubenswrapper[4725]: I1002 11:41:59.732137 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerDied","Data":"20a4480e69d9b209c2655da43bc879607ca22d5c917f8e56eacc9be2ceeca663"}
Oct 02 11:42:00 crc kubenswrapper[4725]: I1002 11:42:00.739858 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerStarted","Data":"46427544533f778ad918703b407bb74c8ea6b43957810f73b76178b6b2e66be2"}
Oct 02 11:42:00 crc kubenswrapper[4725]: I1002 11:42:00.760364 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l47bc" podStartSLOduration=2.295530094 podStartE2EDuration="3.760350086s" podCreationTimestamp="2025-10-02 11:41:57 +0000 UTC" firstStartedPulling="2025-10-02 11:41:58.727409364 +0000 UTC m=+838.634908837" lastFinishedPulling="2025-10-02 11:42:00.192229366 +0000 UTC m=+840.099728829" observedRunningTime="2025-10-02 11:42:00.753884572 +0000 UTC m=+840.661384055" watchObservedRunningTime="2025-10-02 11:42:00.760350086 +0000 UTC m=+840.667849549"
Oct 02 11:42:05 crc kubenswrapper[4725]: I1002 11:42:05.138128 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-757dfd7686-7m9m2"
Oct 02 11:42:07 crc kubenswrapper[4725]: I1002 11:42:07.379130 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:07 crc kubenswrapper[4725]: I1002 11:42:07.379430 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:07 crc kubenswrapper[4725]: I1002 11:42:07.420248 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:07 crc kubenswrapper[4725]: I1002 11:42:07.809896 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:10 crc kubenswrapper[4725]: I1002 11:42:10.853249 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:42:10 crc kubenswrapper[4725]: I1002 11:42:10.853929 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l47bc" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="registry-server" containerID="cri-o://46427544533f778ad918703b407bb74c8ea6b43957810f73b76178b6b2e66be2" gracePeriod=2
Oct 02 11:42:11 crc kubenswrapper[4725]: I1002 11:42:11.797374 4725 generic.go:334] "Generic (PLEG): container finished" podID="4032af10-56af-489b-9eee-95ae518843b4" containerID="46427544533f778ad918703b407bb74c8ea6b43957810f73b76178b6b2e66be2" exitCode=0
Oct 02 11:42:11 crc kubenswrapper[4725]: I1002 11:42:11.797716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerDied","Data":"46427544533f778ad918703b407bb74c8ea6b43957810f73b76178b6b2e66be2"}
Oct 02 11:42:11 crc kubenswrapper[4725]: I1002 11:42:11.872959 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:11 crc kubenswrapper[4725]: I1002 11:42:11.999152 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities\") pod \"4032af10-56af-489b-9eee-95ae518843b4\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") "
Oct 02 11:42:11 crc kubenswrapper[4725]: I1002 11:42:11.999564 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt6r9\" (UniqueName: \"kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9\") pod \"4032af10-56af-489b-9eee-95ae518843b4\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") "
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.000142 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content\") pod \"4032af10-56af-489b-9eee-95ae518843b4\" (UID: \"4032af10-56af-489b-9eee-95ae518843b4\") "
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.000476 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities" (OuterVolumeSpecName: "utilities") pod "4032af10-56af-489b-9eee-95ae518843b4" (UID: "4032af10-56af-489b-9eee-95ae518843b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.003208 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.005929 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9" (OuterVolumeSpecName: "kube-api-access-kt6r9") pod "4032af10-56af-489b-9eee-95ae518843b4" (UID: "4032af10-56af-489b-9eee-95ae518843b4"). InnerVolumeSpecName "kube-api-access-kt6r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.056648 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4032af10-56af-489b-9eee-95ae518843b4" (UID: "4032af10-56af-489b-9eee-95ae518843b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.104630 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4032af10-56af-489b-9eee-95ae518843b4-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.104668 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt6r9\" (UniqueName: \"kubernetes.io/projected/4032af10-56af-489b-9eee-95ae518843b4-kube-api-access-kt6r9\") on node \"crc\" DevicePath \"\""
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.807144 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l47bc" event={"ID":"4032af10-56af-489b-9eee-95ae518843b4","Type":"ContainerDied","Data":"e35acaafcaab60172ff24d78e370ab119b95e3063aa52bd9ddd41200c0a99da0"}
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.807216 4725 scope.go:117] "RemoveContainer" containerID="46427544533f778ad918703b407bb74c8ea6b43957810f73b76178b6b2e66be2"
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.807168 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l47bc"
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.828981 4725 scope.go:117] "RemoveContainer" containerID="20a4480e69d9b209c2655da43bc879607ca22d5c917f8e56eacc9be2ceeca663"
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.845155 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.851133 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l47bc"]
Oct 02 11:42:12 crc kubenswrapper[4725]: I1002 11:42:12.870875 4725 scope.go:117] "RemoveContainer" containerID="2a89fe5403b085afa8124327d2a4e213953df62bf0a9b6730d47e518b0d70b98"
Oct 02 11:42:13 crc kubenswrapper[4725]: I1002 11:42:13.275057 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4032af10-56af-489b-9eee-95ae518843b4" path="/var/lib/kubelet/pods/4032af10-56af-489b-9eee-95ae518843b4/volumes"
Oct 02 11:42:14 crc kubenswrapper[4725]: I1002 11:42:14.978809 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 11:42:14 crc kubenswrapper[4725]: I1002 11:42:14.978896 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 11:42:14 crc kubenswrapper[4725]: I1002 11:42:14.978959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx"
Oct 02 11:42:14 crc kubenswrapper[4725]: I1002 11:42:14.979669 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 11:42:14 crc kubenswrapper[4725]: I1002 11:42:14.979809 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e" gracePeriod=600
Oct 02 11:42:15 crc kubenswrapper[4725]: I1002 11:42:15.827538 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e" exitCode=0
Oct 02 11:42:15 crc kubenswrapper[4725]: I1002 11:42:15.827597 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e"}
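gracePeriod=600 in the "Killing container with a grace period" record above means the runtime delivers SIGTERM first and escalates to SIGKILL only if the container outlives the grace period; the catalog pods earlier in this log were killed with gracePeriod=2. A hedged sketch of the same escalation for a single child process (illustrative only; the real signalling happens in CRI-O, not in kubelet):

    package main

    import (
    	"os/exec"
    	"syscall"
    	"time"
    )

    // killWithGrace sends SIGTERM, waits up to grace for the process to exit,
    // then falls back to SIGKILL -- the same escalation the runtime applies
    // to a container being stopped.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case <-done: // exited on SIGTERM within the grace period
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // grace period elapsed: SIGKILL
    		<-done
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "600")
    	_ = cmd.Start()
    	killWithGrace(cmd, 2*time.Second)
    }

That machine-config-daemon exits with exitCode=0 in the following record suggests it shut down cleanly on SIGTERM, well inside its 600s allowance.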
pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e"} Oct 02 11:42:15 crc kubenswrapper[4725]: I1002 11:42:15.827866 4725 scope.go:117] "RemoveContainer" containerID="2ab06ecdc9e3ce4554f8dc8a83e20bbb77b295847c13387ba2971499da7ed2c3" Oct 02 11:42:24 crc kubenswrapper[4725]: I1002 11:42:24.780047 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8cc8c8574-jxrnl" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.477573 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf"] Oct 02 11:42:25 crc kubenswrapper[4725]: E1002 11:42:25.477908 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="extract-content" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.477929 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="extract-content" Oct 02 11:42:25 crc kubenswrapper[4725]: E1002 11:42:25.477944 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="registry-server" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.477952 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="registry-server" Oct 02 11:42:25 crc kubenswrapper[4725]: E1002 11:42:25.477965 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="extract-utilities" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.477974 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="extract-utilities" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.478091 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4032af10-56af-489b-9eee-95ae518843b4" containerName="registry-server" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.478577 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.480569 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.480893 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bvcxp" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.481716 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lm5qd"] Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.503638 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.510120 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.513584 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.522094 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf"] Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.577521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/177207e1-c514-4ece-ab43-249bf5253dd6-metrics-certs\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.577839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-cert\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578019 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/177207e1-c514-4ece-ab43-249bf5253dd6-frr-startup\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578102 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-metrics\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-reloader\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578289 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-sockets\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8jx\" (UniqueName: \"kubernetes.io/projected/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-kube-api-access-hh8jx\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-conf\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.578597 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnmd\" (UniqueName: \"kubernetes.io/projected/177207e1-c514-4ece-ab43-249bf5253dd6-kube-api-access-sxnmd\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.580532 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-l2ln9"] Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.581371 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.583887 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.584085 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rntj4" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.584220 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.584228 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.590526 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-lsjhw"] Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.591333 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.593000 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.613025 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-lsjhw"] Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680088 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680132 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-conf\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680165 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnmd\" (UniqueName: \"kubernetes.io/projected/177207e1-c514-4ece-ab43-249bf5253dd6-kube-api-access-sxnmd\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680234 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-metrics-certs\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/177207e1-c514-4ece-ab43-249bf5253dd6-metrics-certs\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680310 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metrics-certs\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfwr\" (UniqueName: \"kubernetes.io/projected/58368f71-69e1-4da4-9a08-0c7c5b093c4d-kube-api-access-2hfwr\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-cert\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680406 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/177207e1-c514-4ece-ab43-249bf5253dd6-frr-startup\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680432 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86ln\" (UniqueName: \"kubernetes.io/projected/b130a44e-650b-4940-a3fa-392c5f797d6f-kube-api-access-f86ln\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680454 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-metrics\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680488 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-reloader\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680520 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metallb-excludel2\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-sockets\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680571 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8jx\" (UniqueName: \"kubernetes.io/projected/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-kube-api-access-hh8jx\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680603 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-cert\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.680980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-conf\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.681131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-metrics\") pod \"frr-k8s-lm5qd\" 
(UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.681264 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-frr-sockets\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.681751 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/177207e1-c514-4ece-ab43-249bf5253dd6-frr-startup\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.681950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/177207e1-c514-4ece-ab43-249bf5253dd6-reloader\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.696718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-cert\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.698199 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8jx\" (UniqueName: \"kubernetes.io/projected/88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1-kube-api-access-hh8jx\") pod \"frr-k8s-webhook-server-64bf5d555-m8jtf\" (UID: \"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.698651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/177207e1-c514-4ece-ab43-249bf5253dd6-metrics-certs\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.702418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnmd\" (UniqueName: \"kubernetes.io/projected/177207e1-c514-4ece-ab43-249bf5253dd6-kube-api-access-sxnmd\") pod \"frr-k8s-lm5qd\" (UID: \"177207e1-c514-4ece-ab43-249bf5253dd6\") " pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781785 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-metrics-certs\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metrics-certs\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781831 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hfwr\" (UniqueName: \"kubernetes.io/projected/58368f71-69e1-4da4-9a08-0c7c5b093c4d-kube-api-access-2hfwr\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781854 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86ln\" (UniqueName: \"kubernetes.io/projected/b130a44e-650b-4940-a3fa-392c5f797d6f-kube-api-access-f86ln\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781891 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metallb-excludel2\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.781916 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-cert\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: E1002 11:42:25.782026 4725 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 02 11:42:25 crc kubenswrapper[4725]: E1002 11:42:25.782097 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist podName:58368f71-69e1-4da4-9a08-0c7c5b093c4d nodeName:}" failed. No retries permitted until 2025-10-02 11:42:26.282080094 +0000 UTC m=+866.189579557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist") pod "speaker-l2ln9" (UID: "58368f71-69e1-4da4-9a08-0c7c5b093c4d") : secret "metallb-memberlist" not found Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.785987 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.786434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metallb-excludel2\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.790329 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-metrics-certs\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.790460 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-metrics-certs\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.797087 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b130a44e-650b-4940-a3fa-392c5f797d6f-cert\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.801862 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hfwr\" (UniqueName: \"kubernetes.io/projected/58368f71-69e1-4da4-9a08-0c7c5b093c4d-kube-api-access-2hfwr\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.808299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86ln\" (UniqueName: \"kubernetes.io/projected/b130a44e-650b-4940-a3fa-392c5f797d6f-kube-api-access-f86ln\") pod \"controller-68d546b9d8-lsjhw\" (UID: \"b130a44e-650b-4940-a3fa-392c5f797d6f\") " pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.837949 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.847288 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:25 crc kubenswrapper[4725]: I1002 11:42:25.917556 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.150103 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-lsjhw"] Oct 02 11:42:26 crc kubenswrapper[4725]: W1002 11:42:26.151922 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb130a44e_650b_4940_a3fa_392c5f797d6f.slice/crio-a066a29364a1164925e593b012826ff5bf116a45bb8901d3280779916ce1e0b0 WatchSource:0}: Error finding container a066a29364a1164925e593b012826ff5bf116a45bb8901d3280779916ce1e0b0: Status 404 returned error can't find the container with id a066a29364a1164925e593b012826ff5bf116a45bb8901d3280779916ce1e0b0 Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.266667 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf"] Oct 02 11:42:26 crc kubenswrapper[4725]: W1002 11:42:26.275359 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88871cd6_3d0d_4bfb_bf21_f4a6c35d9ac1.slice/crio-43cbf333e3119ce8c09267024f79dedb62994be66193bc8bc2b2e30086277fbc WatchSource:0}: Error finding container 43cbf333e3119ce8c09267024f79dedb62994be66193bc8bc2b2e30086277fbc: Status 404 returned error can't find the container with id 43cbf333e3119ce8c09267024f79dedb62994be66193bc8bc2b2e30086277fbc Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.288301 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:26 crc kubenswrapper[4725]: E1002 11:42:26.288546 4725 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 02 11:42:26 crc kubenswrapper[4725]: E1002 11:42:26.288653 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist podName:58368f71-69e1-4da4-9a08-0c7c5b093c4d nodeName:}" failed. No retries permitted until 2025-10-02 11:42:27.288630576 +0000 UTC m=+867.196130039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist") pod "speaker-l2ln9" (UID: "58368f71-69e1-4da4-9a08-0c7c5b093c4d") : secret "metallb-memberlist" not found Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.893705 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" event={"ID":"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1","Type":"ContainerStarted","Data":"43cbf333e3119ce8c09267024f79dedb62994be66193bc8bc2b2e30086277fbc"} Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.894799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"c63ba1a75889dd56c8ca4b02925070f401c675bc53f8e6a45cd21a7624f3d2d6"} Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.896761 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-lsjhw" event={"ID":"b130a44e-650b-4940-a3fa-392c5f797d6f","Type":"ContainerStarted","Data":"84bbb2417604984bcd828e800b8a880ea97ed0946f4a1e5996735bf7510c76fc"} Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.896802 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-lsjhw" event={"ID":"b130a44e-650b-4940-a3fa-392c5f797d6f","Type":"ContainerStarted","Data":"4eff664ff240707aa7390c5b21b6644a98814ecf0ff17073d97f848afcae0c76"} Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.896815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-lsjhw" event={"ID":"b130a44e-650b-4940-a3fa-392c5f797d6f","Type":"ContainerStarted","Data":"a066a29364a1164925e593b012826ff5bf116a45bb8901d3280779916ce1e0b0"} Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.896910 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:26 crc kubenswrapper[4725]: I1002 11:42:26.913503 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-lsjhw" podStartSLOduration=1.9134815509999998 podStartE2EDuration="1.913481551s" podCreationTimestamp="2025-10-02 11:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:42:26.909327966 +0000 UTC m=+866.816827439" watchObservedRunningTime="2025-10-02 11:42:26.913481551 +0000 UTC m=+866.820981014" Oct 02 11:42:27 crc kubenswrapper[4725]: I1002 11:42:27.299474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:27 crc kubenswrapper[4725]: I1002 11:42:27.308005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58368f71-69e1-4da4-9a08-0c7c5b093c4d-memberlist\") pod \"speaker-l2ln9\" (UID: \"58368f71-69e1-4da4-9a08-0c7c5b093c4d\") " pod="metallb-system/speaker-l2ln9" Oct 02 11:42:27 crc kubenswrapper[4725]: I1002 11:42:27.408490 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-l2ln9" Oct 02 11:42:27 crc kubenswrapper[4725]: I1002 11:42:27.905530 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2ln9" event={"ID":"58368f71-69e1-4da4-9a08-0c7c5b093c4d","Type":"ContainerStarted","Data":"f888dd98bd112d7bd65d9defdb0c6322f1aee149944f356abea7dcf06cff736a"} Oct 02 11:42:27 crc kubenswrapper[4725]: I1002 11:42:27.905572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2ln9" event={"ID":"58368f71-69e1-4da4-9a08-0c7c5b093c4d","Type":"ContainerStarted","Data":"bce503cfa9915a74c1ab4ed2e6d5c2960cc46be110ed50535614b8d7ed0237c7"} Oct 02 11:42:28 crc kubenswrapper[4725]: I1002 11:42:28.912940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2ln9" event={"ID":"58368f71-69e1-4da4-9a08-0c7c5b093c4d","Type":"ContainerStarted","Data":"c86421f7b94d1b3d96adf684398024f10b5ae09464c587182c7bad919e4f0ad8"} Oct 02 11:42:28 crc kubenswrapper[4725]: I1002 11:42:28.913421 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-l2ln9" Oct 02 11:42:28 crc kubenswrapper[4725]: I1002 11:42:28.933856 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-l2ln9" podStartSLOduration=3.933842393 podStartE2EDuration="3.933842393s" podCreationTimestamp="2025-10-02 11:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:42:28.927740087 +0000 UTC m=+868.835239550" watchObservedRunningTime="2025-10-02 11:42:28.933842393 +0000 UTC m=+868.841341856" Oct 02 11:42:33 crc kubenswrapper[4725]: I1002 11:42:33.944668 4725 generic.go:334] "Generic (PLEG): container finished" podID="177207e1-c514-4ece-ab43-249bf5253dd6" containerID="bb792ccbeff347ca3201880d7b1f9b6ed3c1c865ef38c2a4003c0b543a396618" exitCode=0 Oct 02 11:42:33 crc kubenswrapper[4725]: I1002 11:42:33.944720 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerDied","Data":"bb792ccbeff347ca3201880d7b1f9b6ed3c1c865ef38c2a4003c0b543a396618"} Oct 02 11:42:33 crc kubenswrapper[4725]: I1002 11:42:33.948663 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" event={"ID":"88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1","Type":"ContainerStarted","Data":"1d6b0f6932bd3a6a4b1482060f223e42a763a96247c64ce96a8b67c10c43bef8"} Oct 02 11:42:33 crc kubenswrapper[4725]: I1002 11:42:33.948832 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:33 crc kubenswrapper[4725]: I1002 11:42:33.984266 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" podStartSLOduration=2.189191584 podStartE2EDuration="8.984248773s" podCreationTimestamp="2025-10-02 11:42:25 +0000 UTC" firstStartedPulling="2025-10-02 11:42:26.280450587 +0000 UTC m=+866.187950050" lastFinishedPulling="2025-10-02 11:42:33.075507776 +0000 UTC m=+872.983007239" observedRunningTime="2025-10-02 11:42:33.981901254 +0000 UTC m=+873.889400747" watchObservedRunningTime="2025-10-02 11:42:33.984248773 +0000 UTC m=+873.891748236" Oct 02 11:42:34 crc kubenswrapper[4725]: I1002 11:42:34.957391 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="177207e1-c514-4ece-ab43-249bf5253dd6" containerID="81c8ea481628ba0dba73655a6099d0bac77770ffc1194596bb82bbbddf358077" exitCode=0 Oct 02 11:42:34 crc kubenswrapper[4725]: I1002 11:42:34.957619 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerDied","Data":"81c8ea481628ba0dba73655a6099d0bac77770ffc1194596bb82bbbddf358077"} Oct 02 11:42:35 crc kubenswrapper[4725]: I1002 11:42:35.964208 4725 generic.go:334] "Generic (PLEG): container finished" podID="177207e1-c514-4ece-ab43-249bf5253dd6" containerID="98e1db5e1de424ff0e416475a84a57c467b2b34e2ea327d59387cf910c872d57" exitCode=0 Oct 02 11:42:35 crc kubenswrapper[4725]: I1002 11:42:35.964264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerDied","Data":"98e1db5e1de424ff0e416475a84a57c467b2b34e2ea327d59387cf910c872d57"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.980690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"2b431ebb9ec52e6c6de57a0755ba034ca800f6aef87cd96b36d41a3af3c00815"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"d476d9bae3c1dde0645b0461c2b4813c1e46457ad6da048447398aa3a8abf231"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981072 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"7d763bb067178e71b97826fbbb9e51e7b424523b3e4bfe429f5dc72e416082fa"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981087 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"1b033a884c7beede9bfc22d0c13515518ea5f98c2fbcd491f9e5dc946fa52589"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"637f8bbda5f67f63c4f05c06f0761db3d2141d2a84457f8b50f95abe0e050401"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm5qd" event={"ID":"177207e1-c514-4ece-ab43-249bf5253dd6","Type":"ContainerStarted","Data":"59deed26e529f4ab9c00cca35795fd320692e54c075c31c9b020d2ac40b1304e"} Oct 02 11:42:36 crc kubenswrapper[4725]: I1002 11:42:36.981143 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:37 crc kubenswrapper[4725]: I1002 11:42:37.002982 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lm5qd" podStartSLOduration=4.907930984 podStartE2EDuration="12.002966354s" podCreationTimestamp="2025-10-02 11:42:25 +0000 UTC" firstStartedPulling="2025-10-02 11:42:25.978403854 +0000 UTC m=+865.885903337" lastFinishedPulling="2025-10-02 11:42:33.073439244 +0000 UTC m=+872.980938707" observedRunningTime="2025-10-02 11:42:37.00007452 +0000 UTC m=+876.907574003" 
watchObservedRunningTime="2025-10-02 11:42:37.002966354 +0000 UTC m=+876.910465817" Oct 02 11:42:37 crc kubenswrapper[4725]: I1002 11:42:37.414266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-l2ln9" Oct 02 11:42:40 crc kubenswrapper[4725]: I1002 11:42:40.847978 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:40 crc kubenswrapper[4725]: I1002 11:42:40.885509 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.866539 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xxgld"] Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.869156 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.872027 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.872140 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5p24d" Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.872430 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.880226 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xxgld"] Oct 02 11:42:43 crc kubenswrapper[4725]: I1002 11:42:43.932237 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc9kh\" (UniqueName: \"kubernetes.io/projected/b1d09d2a-fb84-40db-91ed-72875d001d9a-kube-api-access-qc9kh\") pod \"openstack-operator-index-xxgld\" (UID: \"b1d09d2a-fb84-40db-91ed-72875d001d9a\") " pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:44 crc kubenswrapper[4725]: I1002 11:42:44.034467 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc9kh\" (UniqueName: \"kubernetes.io/projected/b1d09d2a-fb84-40db-91ed-72875d001d9a-kube-api-access-qc9kh\") pod \"openstack-operator-index-xxgld\" (UID: \"b1d09d2a-fb84-40db-91ed-72875d001d9a\") " pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:44 crc kubenswrapper[4725]: I1002 11:42:44.058620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc9kh\" (UniqueName: \"kubernetes.io/projected/b1d09d2a-fb84-40db-91ed-72875d001d9a-kube-api-access-qc9kh\") pod \"openstack-operator-index-xxgld\" (UID: \"b1d09d2a-fb84-40db-91ed-72875d001d9a\") " pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:44 crc kubenswrapper[4725]: I1002 11:42:44.200651 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:44 crc kubenswrapper[4725]: I1002 11:42:44.420808 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xxgld"] Oct 02 11:42:45 crc kubenswrapper[4725]: I1002 11:42:45.030840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxgld" event={"ID":"b1d09d2a-fb84-40db-91ed-72875d001d9a","Type":"ContainerStarted","Data":"dac8bd9606c537c6cc97a5a24ee9467ef5a26c84ef523a2715f963eab48e3551"} Oct 02 11:42:45 crc kubenswrapper[4725]: I1002 11:42:45.844513 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-m8jtf" Oct 02 11:42:45 crc kubenswrapper[4725]: I1002 11:42:45.924007 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-lsjhw" Oct 02 11:42:47 crc kubenswrapper[4725]: I1002 11:42:47.043182 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxgld" event={"ID":"b1d09d2a-fb84-40db-91ed-72875d001d9a","Type":"ContainerStarted","Data":"21bc23c3cfd90c2d70f1f2a776273ebcc3bd3ef1289f7dee6aeb468c37f419ec"} Oct 02 11:42:47 crc kubenswrapper[4725]: I1002 11:42:47.064664 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xxgld" podStartSLOduration=1.796341242 podStartE2EDuration="4.064644189s" podCreationTimestamp="2025-10-02 11:42:43 +0000 UTC" firstStartedPulling="2025-10-02 11:42:44.440306253 +0000 UTC m=+884.347805716" lastFinishedPulling="2025-10-02 11:42:46.7086092 +0000 UTC m=+886.616108663" observedRunningTime="2025-10-02 11:42:47.060540044 +0000 UTC m=+886.968039517" watchObservedRunningTime="2025-10-02 11:42:47.064644189 +0000 UTC m=+886.972143672" Oct 02 11:42:54 crc kubenswrapper[4725]: I1002 11:42:54.201362 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:54 crc kubenswrapper[4725]: I1002 11:42:54.201927 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:54 crc kubenswrapper[4725]: I1002 11:42:54.228580 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:55 crc kubenswrapper[4725]: I1002 11:42:55.124266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xxgld" Oct 02 11:42:55 crc kubenswrapper[4725]: I1002 11:42:55.851114 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lm5qd" Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.945174 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z"] Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.947411 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.949835 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kbzzd" Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.959781 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z"] Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.983866 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.984266 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbr4\" (UniqueName: \"kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:01 crc kubenswrapper[4725]: I1002 11:43:01.984525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.086453 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.086512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsbr4\" (UniqueName: \"kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.086562 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.086966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.087455 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.109209 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsbr4\" (UniqueName: \"kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4\") pod \"6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.279286 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:02 crc kubenswrapper[4725]: I1002 11:43:02.703036 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z"] Oct 02 11:43:02 crc kubenswrapper[4725]: W1002 11:43:02.710672 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae4a4227_820c_48fd_a32d_7e62caaa222b.slice/crio-f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c WatchSource:0}: Error finding container f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c: Status 404 returned error can't find the container with id f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c Oct 02 11:43:03 crc kubenswrapper[4725]: I1002 11:43:03.150397 4725 generic.go:334] "Generic (PLEG): container finished" podID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerID="ee33def22eceb8f8d35e1bb62ba5e05ba306b6c64ceeb1b587844ca715132527" exitCode=0 Oct 02 11:43:03 crc kubenswrapper[4725]: I1002 11:43:03.150478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" event={"ID":"ae4a4227-820c-48fd-a32d-7e62caaa222b","Type":"ContainerDied","Data":"ee33def22eceb8f8d35e1bb62ba5e05ba306b6c64ceeb1b587844ca715132527"} Oct 02 11:43:03 crc kubenswrapper[4725]: I1002 11:43:03.150716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" event={"ID":"ae4a4227-820c-48fd-a32d-7e62caaa222b","Type":"ContainerStarted","Data":"f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c"} Oct 02 11:43:04 crc kubenswrapper[4725]: I1002 11:43:04.158821 4725 generic.go:334] "Generic (PLEG): container finished" podID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerID="53602e64b9dea3edc252e26ccf2d29147ae3884d45cfa36b5c93e0ccb1085f39" exitCode=0 Oct 02 11:43:04 crc kubenswrapper[4725]: I1002 11:43:04.158876 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" event={"ID":"ae4a4227-820c-48fd-a32d-7e62caaa222b","Type":"ContainerDied","Data":"53602e64b9dea3edc252e26ccf2d29147ae3884d45cfa36b5c93e0ccb1085f39"} Oct 02 11:43:05 crc kubenswrapper[4725]: I1002 11:43:05.171349 4725 generic.go:334] "Generic (PLEG): container finished" podID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerID="5aa9b0586904870ff473310932ae97f24d97131ace1c91045c8fd2c658196254" exitCode=0 Oct 02 11:43:05 crc kubenswrapper[4725]: I1002 11:43:05.171573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" event={"ID":"ae4a4227-820c-48fd-a32d-7e62caaa222b","Type":"ContainerDied","Data":"5aa9b0586904870ff473310932ae97f24d97131ace1c91045c8fd2c658196254"} Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.442226 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.553655 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util\") pod \"ae4a4227-820c-48fd-a32d-7e62caaa222b\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.553738 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsbr4\" (UniqueName: \"kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4\") pod \"ae4a4227-820c-48fd-a32d-7e62caaa222b\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.553902 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle\") pod \"ae4a4227-820c-48fd-a32d-7e62caaa222b\" (UID: \"ae4a4227-820c-48fd-a32d-7e62caaa222b\") " Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.554768 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle" (OuterVolumeSpecName: "bundle") pod "ae4a4227-820c-48fd-a32d-7e62caaa222b" (UID: "ae4a4227-820c-48fd-a32d-7e62caaa222b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.559771 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4" (OuterVolumeSpecName: "kube-api-access-nsbr4") pod "ae4a4227-820c-48fd-a32d-7e62caaa222b" (UID: "ae4a4227-820c-48fd-a32d-7e62caaa222b"). InnerVolumeSpecName "kube-api-access-nsbr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.568731 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util" (OuterVolumeSpecName: "util") pod "ae4a4227-820c-48fd-a32d-7e62caaa222b" (UID: "ae4a4227-820c-48fd-a32d-7e62caaa222b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.655414 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.655454 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae4a4227-820c-48fd-a32d-7e62caaa222b-util\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:06 crc kubenswrapper[4725]: I1002 11:43:06.655466 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsbr4\" (UniqueName: \"kubernetes.io/projected/ae4a4227-820c-48fd-a32d-7e62caaa222b-kube-api-access-nsbr4\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:07 crc kubenswrapper[4725]: I1002 11:43:07.190089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" event={"ID":"ae4a4227-820c-48fd-a32d-7e62caaa222b","Type":"ContainerDied","Data":"f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c"} Oct 02 11:43:07 crc kubenswrapper[4725]: I1002 11:43:07.190156 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72750c8592474861654f6a508cea52059c1e925aea6590a845c84d79abd7b9c" Oct 02 11:43:07 crc kubenswrapper[4725]: I1002 11:43:07.190179 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.120518 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb"] Oct 02 11:43:10 crc kubenswrapper[4725]: E1002 11:43:10.121359 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="extract" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.121376 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="extract" Oct 02 11:43:10 crc kubenswrapper[4725]: E1002 11:43:10.121388 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="pull" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.121396 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="pull" Oct 02 11:43:10 crc kubenswrapper[4725]: E1002 11:43:10.121438 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="util" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.121445 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="util" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.121588 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4a4227-820c-48fd-a32d-7e62caaa222b" containerName="extract" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.122328 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.124567 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-v8lsw" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.154903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb"] Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.201363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspcl\" (UniqueName: \"kubernetes.io/projected/34d68cf3-a46e-4588-abff-0487fe2ceacc-kube-api-access-sspcl\") pod \"openstack-operator-controller-operator-859f658b7-xk7wb\" (UID: \"34d68cf3-a46e-4588-abff-0487fe2ceacc\") " pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.302940 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sspcl\" (UniqueName: \"kubernetes.io/projected/34d68cf3-a46e-4588-abff-0487fe2ceacc-kube-api-access-sspcl\") pod \"openstack-operator-controller-operator-859f658b7-xk7wb\" (UID: \"34d68cf3-a46e-4588-abff-0487fe2ceacc\") " pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.327329 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sspcl\" (UniqueName: \"kubernetes.io/projected/34d68cf3-a46e-4588-abff-0487fe2ceacc-kube-api-access-sspcl\") pod \"openstack-operator-controller-operator-859f658b7-xk7wb\" (UID: \"34d68cf3-a46e-4588-abff-0487fe2ceacc\") " pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.439830 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:10 crc kubenswrapper[4725]: I1002 11:43:10.921418 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb"] Oct 02 11:43:10 crc kubenswrapper[4725]: W1002 11:43:10.941205 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d68cf3_a46e_4588_abff_0487fe2ceacc.slice/crio-e935d2f28553bb0c1b23f838caae2707b796f891224e36e99046bf7d44194e4d WatchSource:0}: Error finding container e935d2f28553bb0c1b23f838caae2707b796f891224e36e99046bf7d44194e4d: Status 404 returned error can't find the container with id e935d2f28553bb0c1b23f838caae2707b796f891224e36e99046bf7d44194e4d Oct 02 11:43:11 crc kubenswrapper[4725]: I1002 11:43:11.215473 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" event={"ID":"34d68cf3-a46e-4588-abff-0487fe2ceacc","Type":"ContainerStarted","Data":"e935d2f28553bb0c1b23f838caae2707b796f891224e36e99046bf7d44194e4d"} Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.915052 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.918894 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.921324 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.949034 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.949078 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npgk\" (UniqueName: \"kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:13 crc kubenswrapper[4725]: I1002 11:43:13.949100 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.050945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.051015 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npgk\" (UniqueName: \"kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.051050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.051664 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.052379 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.073793 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2npgk\" (UniqueName: \"kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk\") pod \"community-operators-mn65f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.241906 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:14 crc kubenswrapper[4725]: I1002 11:43:14.955433 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:14 crc kubenswrapper[4725]: W1002 11:43:14.959216 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f56ef0_9d49_48c4_8083_70ea8a67268f.slice/crio-6c24c8e79cfd3362694bd85f6f0d5e2ab76157059e3e71e847627333eaddb48a WatchSource:0}: Error finding container 6c24c8e79cfd3362694bd85f6f0d5e2ab76157059e3e71e847627333eaddb48a: Status 404 returned error can't find the container with id 6c24c8e79cfd3362694bd85f6f0d5e2ab76157059e3e71e847627333eaddb48a Oct 02 11:43:15 crc kubenswrapper[4725]: I1002 11:43:15.244400 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" event={"ID":"34d68cf3-a46e-4588-abff-0487fe2ceacc","Type":"ContainerStarted","Data":"d50bbd0e3e1adfc05539e825d73abc0240c6018adbd9aff80b001a82287de355"} Oct 02 11:43:15 crc kubenswrapper[4725]: I1002 11:43:15.246192 4725 generic.go:334] "Generic (PLEG): container finished" podID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerID="dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd" exitCode=0 Oct 02 11:43:15 crc kubenswrapper[4725]: I1002 11:43:15.246223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerDied","Data":"dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd"} Oct 02 11:43:15 crc kubenswrapper[4725]: I1002 11:43:15.246241 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerStarted","Data":"6c24c8e79cfd3362694bd85f6f0d5e2ab76157059e3e71e847627333eaddb48a"} Oct 02 11:43:17 crc kubenswrapper[4725]: I1002 11:43:17.259039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" event={"ID":"34d68cf3-a46e-4588-abff-0487fe2ceacc","Type":"ContainerStarted","Data":"4dbab9496018fe15eb724399056246c14175dc2304369beb5c40485c23f4bd68"} Oct 02 11:43:17 crc kubenswrapper[4725]: I1002 11:43:17.259350 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:17 crc kubenswrapper[4725]: I1002 11:43:17.260600 4725 generic.go:334] "Generic (PLEG): container finished" podID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerID="01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d" exitCode=0 Oct 02 11:43:17 crc kubenswrapper[4725]: I1002 11:43:17.260630 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" 
event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerDied","Data":"01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d"} Oct 02 11:43:17 crc kubenswrapper[4725]: I1002 11:43:17.292973 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" podStartSLOduration=1.302980971 podStartE2EDuration="7.292953266s" podCreationTimestamp="2025-10-02 11:43:10 +0000 UTC" firstStartedPulling="2025-10-02 11:43:10.944179806 +0000 UTC m=+910.851679279" lastFinishedPulling="2025-10-02 11:43:16.934152081 +0000 UTC m=+916.841651574" observedRunningTime="2025-10-02 11:43:17.2889191 +0000 UTC m=+917.196418563" watchObservedRunningTime="2025-10-02 11:43:17.292953266 +0000 UTC m=+917.200452729" Oct 02 11:43:18 crc kubenswrapper[4725]: I1002 11:43:18.268053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerStarted","Data":"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc"} Oct 02 11:43:18 crc kubenswrapper[4725]: I1002 11:43:18.284321 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mn65f" podStartSLOduration=2.838664887 podStartE2EDuration="5.284301131s" podCreationTimestamp="2025-10-02 11:43:13 +0000 UTC" firstStartedPulling="2025-10-02 11:43:15.247628565 +0000 UTC m=+915.155128028" lastFinishedPulling="2025-10-02 11:43:17.693264809 +0000 UTC m=+917.600764272" observedRunningTime="2025-10-02 11:43:18.283212922 +0000 UTC m=+918.190712375" watchObservedRunningTime="2025-10-02 11:43:18.284301131 +0000 UTC m=+918.191800594" Oct 02 11:43:20 crc kubenswrapper[4725]: I1002 11:43:20.443114 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-859f658b7-xk7wb" Oct 02 11:43:24 crc kubenswrapper[4725]: I1002 11:43:24.242977 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:24 crc kubenswrapper[4725]: I1002 11:43:24.243587 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:24 crc kubenswrapper[4725]: I1002 11:43:24.280890 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:24 crc kubenswrapper[4725]: I1002 11:43:24.339153 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:24 crc kubenswrapper[4725]: I1002 11:43:24.530192 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.312107 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mn65f" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="registry-server" containerID="cri-o://9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc" gracePeriod=2 Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.706771 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.814109 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities\") pod \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.814161 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2npgk\" (UniqueName: \"kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk\") pod \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.814180 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content\") pod \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\" (UID: \"d2f56ef0-9d49-48c4-8083-70ea8a67268f\") " Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.814946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities" (OuterVolumeSpecName: "utilities") pod "d2f56ef0-9d49-48c4-8083-70ea8a67268f" (UID: "d2f56ef0-9d49-48c4-8083-70ea8a67268f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.834921 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk" (OuterVolumeSpecName: "kube-api-access-2npgk") pod "d2f56ef0-9d49-48c4-8083-70ea8a67268f" (UID: "d2f56ef0-9d49-48c4-8083-70ea8a67268f"). InnerVolumeSpecName "kube-api-access-2npgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.882033 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2f56ef0-9d49-48c4-8083-70ea8a67268f" (UID: "d2f56ef0-9d49-48c4-8083-70ea8a67268f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.916146 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.916180 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2npgk\" (UniqueName: \"kubernetes.io/projected/d2f56ef0-9d49-48c4-8083-70ea8a67268f-kube-api-access-2npgk\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.916190 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f56ef0-9d49-48c4-8083-70ea8a67268f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.933033 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:26 crc kubenswrapper[4725]: E1002 11:43:26.933241 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="extract-utilities" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.933252 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="extract-utilities" Oct 02 11:43:26 crc kubenswrapper[4725]: E1002 11:43:26.933264 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="registry-server" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.933270 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="registry-server" Oct 02 11:43:26 crc kubenswrapper[4725]: E1002 11:43:26.933283 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="extract-content" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.933289 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="extract-content" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.933388 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerName="registry-server" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.934163 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:26 crc kubenswrapper[4725]: I1002 11:43:26.949644 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.118391 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbf5\" (UniqueName: \"kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.118747 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.118992 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.219504 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.219559 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbf5\" (UniqueName: \"kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.219601 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.220043 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.220071 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.250141 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4kbf5\" (UniqueName: \"kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5\") pod \"redhat-marketplace-4dths\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.318807 4725 generic.go:334] "Generic (PLEG): container finished" podID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" containerID="9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc" exitCode=0 Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.318867 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mn65f" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.318860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerDied","Data":"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc"} Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.319000 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn65f" event={"ID":"d2f56ef0-9d49-48c4-8083-70ea8a67268f","Type":"ContainerDied","Data":"6c24c8e79cfd3362694bd85f6f0d5e2ab76157059e3e71e847627333eaddb48a"} Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.319022 4725 scope.go:117] "RemoveContainer" containerID="9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.342995 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.344876 4725 scope.go:117] "RemoveContainer" containerID="01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.346815 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mn65f"] Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.361649 4725 scope.go:117] "RemoveContainer" containerID="dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.380669 4725 scope.go:117] "RemoveContainer" containerID="9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc" Oct 02 11:43:27 crc kubenswrapper[4725]: E1002 11:43:27.381221 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc\": container with ID starting with 9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc not found: ID does not exist" containerID="9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.381250 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc"} err="failed to get container status \"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc\": rpc error: code = NotFound desc = could not find container \"9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc\": container with ID starting with 9c3bf76a72b5f316b0f3b0b99cd78f79cfcc8ae12551b7b011fd19333f6fdcfc not found: ID does not exist" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 
11:43:27.381270 4725 scope.go:117] "RemoveContainer" containerID="01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d" Oct 02 11:43:27 crc kubenswrapper[4725]: E1002 11:43:27.381618 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d\": container with ID starting with 01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d not found: ID does not exist" containerID="01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.381656 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d"} err="failed to get container status \"01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d\": rpc error: code = NotFound desc = could not find container \"01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d\": container with ID starting with 01bacb5f9bfdb0d7429d7d143d7f9429a93ca227441bfaeec03dcf5dd091ae1d not found: ID does not exist" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.381693 4725 scope.go:117] "RemoveContainer" containerID="dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd" Oct 02 11:43:27 crc kubenswrapper[4725]: E1002 11:43:27.382254 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd\": container with ID starting with dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd not found: ID does not exist" containerID="dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.382302 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd"} err="failed to get container status \"dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd\": rpc error: code = NotFound desc = could not find container \"dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd\": container with ID starting with dc8c26eb71fc503f4dd5284e8e8702a003089ffe3339756f1bc5ca2203ee93dd not found: ID does not exist" Oct 02 11:43:27 crc kubenswrapper[4725]: I1002 11:43:27.545507 4725 util.go:30] "No sandbox for pod can be found. 
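The E-level "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above look alarming but are routine: community-operators-mn65f was already torn down with its sandbox, so when the cleanup path re-checks the three container IDs, CRI-O answers with gRPC NotFound and the kubelet treats absence as the desired outcome. A sketch of that idempotent-delete pattern, assuming a hypothetical runtimeClient interface in place of the real CRI surface:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient is a hypothetical stand-in for the kubelet's gRPC
// connection to the container runtime.
type runtimeClient interface {
	RemoveContainer(ctx context.Context, id string) error
}

// fakeRuntime always reports the container as missing, mimicking the
// second cleanup pass seen in the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(_ context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

// removeIfPresent deletes a container but treats "already gone" as
// success: for cleanup, an absent container is exactly the end state
// the caller wanted, so NotFound is logged and swallowed, not retried.
func removeIfPresent(ctx context.Context, rt runtimeClient, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone; treating as removed\n", id)
		return nil
	}
	return err
}

func main() {
	_ = removeIfPresent(context.Background(), fakeRuntime{}, "9c3bf76a72b5")
}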
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:28 crc kubenswrapper[4725]: I1002 11:43:28.092003 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:28 crc kubenswrapper[4725]: W1002 11:43:28.121119 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079de0fb_5d10_4dae_8b54_ce1169f08123.slice/crio-d374dacfb55177ca4580ca026418a0bed766f05b04ae14c9981dd1cce98e5819 WatchSource:0}: Error finding container d374dacfb55177ca4580ca026418a0bed766f05b04ae14c9981dd1cce98e5819: Status 404 returned error can't find the container with id d374dacfb55177ca4580ca026418a0bed766f05b04ae14c9981dd1cce98e5819 Oct 02 11:43:28 crc kubenswrapper[4725]: I1002 11:43:28.325127 4725 generic.go:334] "Generic (PLEG): container finished" podID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerID="84f392ec1f639b5456e647cc352a7d3af2e9e5eccc4aae03ac0a0bad105ffb31" exitCode=0 Oct 02 11:43:28 crc kubenswrapper[4725]: I1002 11:43:28.325193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerDied","Data":"84f392ec1f639b5456e647cc352a7d3af2e9e5eccc4aae03ac0a0bad105ffb31"} Oct 02 11:43:28 crc kubenswrapper[4725]: I1002 11:43:28.325220 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerStarted","Data":"d374dacfb55177ca4580ca026418a0bed766f05b04ae14c9981dd1cce98e5819"} Oct 02 11:43:29 crc kubenswrapper[4725]: I1002 11:43:29.277314 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f56ef0-9d49-48c4-8083-70ea8a67268f" path="/var/lib/kubelet/pods/d2f56ef0-9d49-48c4-8083-70ea8a67268f/volumes" Oct 02 11:43:29 crc kubenswrapper[4725]: I1002 11:43:29.336103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerStarted","Data":"6b182432556f31dd10a2334ed1273df0498ed74ba332e7e728c59ec5452fedcd"} Oct 02 11:43:30 crc kubenswrapper[4725]: I1002 11:43:30.343873 4725 generic.go:334] "Generic (PLEG): container finished" podID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerID="6b182432556f31dd10a2334ed1273df0498ed74ba332e7e728c59ec5452fedcd" exitCode=0 Oct 02 11:43:30 crc kubenswrapper[4725]: I1002 11:43:30.343919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerDied","Data":"6b182432556f31dd10a2334ed1273df0498ed74ba332e7e728c59ec5452fedcd"} Oct 02 11:43:31 crc kubenswrapper[4725]: I1002 11:43:31.351924 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerStarted","Data":"1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9"} Oct 02 11:43:31 crc kubenswrapper[4725]: I1002 11:43:31.371461 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4dths" podStartSLOduration=2.675613761 podStartE2EDuration="5.371438713s" podCreationTimestamp="2025-10-02 11:43:26 +0000 UTC" firstStartedPulling="2025-10-02 11:43:28.326646814 +0000 UTC m=+928.234146277" 
lastFinishedPulling="2025-10-02 11:43:31.022471766 +0000 UTC m=+930.929971229" observedRunningTime="2025-10-02 11:43:31.368389443 +0000 UTC m=+931.275888926" watchObservedRunningTime="2025-10-02 11:43:31.371438713 +0000 UTC m=+931.278938196" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.547161 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.548552 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.589345 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.606996 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.621017 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.631649 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-czz6n" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.633718 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.634985 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.637797 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.638415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nq6vp" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.657918 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.666815 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.667777 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.670231 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-28cm9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.680606 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.683470 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.687047 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.689776 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.690585 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kbfmr" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.690917 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.696472 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fqjgl" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.728738 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.738672 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.739887 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.742038 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qgv2v" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.751442 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jschf\" (UniqueName: \"kubernetes.io/projected/a51da7c1-9136-40c8-851a-f7c2d1f7a644-kube-api-access-jschf\") pod \"barbican-operator-controller-manager-6ff8b75857-n5pzn\" (UID: \"a51da7c1-9136-40c8-851a-f7c2d1f7a644\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.751524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24s5f\" (UniqueName: \"kubernetes.io/projected/2a1bf314-ad40-4055-8373-b05888c06791-kube-api-access-24s5f\") pod \"cinder-operator-controller-manager-644bddb6d8-6hxv6\" (UID: \"2a1bf314-ad40-4055-8373-b05888c06791\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.752412 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.753870 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.767317 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-glnwc" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.767605 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.768541 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.779687 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.781836 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.786968 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fv2bn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.787915 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.806278 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.807260 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.812631 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x696g" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.824121 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.833555 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.837466 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.838607 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.840447 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lzwk4" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rbw\" (UniqueName: \"kubernetes.io/projected/023a7a0e-9279-4b9b-ba5d-6cd41b2aa729-kube-api-access-p8rbw\") pod \"manila-operator-controller-manager-6d68dbc695-zcsfx\" (UID: \"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24s5f\" (UniqueName: \"kubernetes.io/projected/2a1bf314-ad40-4055-8373-b05888c06791-kube-api-access-24s5f\") pod \"cinder-operator-controller-manager-644bddb6d8-6hxv6\" (UID: \"2a1bf314-ad40-4055-8373-b05888c06791\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856345 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwshs\" (UniqueName: \"kubernetes.io/projected/e66fa8da-eabe-4fe6-8689-961c09641552-kube-api-access-bwshs\") pod \"heat-operator-controller-manager-5d889d78cf-cv4rz\" (UID: \"e66fa8da-eabe-4fe6-8689-961c09641552\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856483 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jschf\" (UniqueName: \"kubernetes.io/projected/a51da7c1-9136-40c8-851a-f7c2d1f7a644-kube-api-access-jschf\") pod \"barbican-operator-controller-manager-6ff8b75857-n5pzn\" (UID: \"a51da7c1-9136-40c8-851a-f7c2d1f7a644\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvk7n\" (UniqueName: \"kubernetes.io/projected/827de292-bc8c-40da-be5f-443d06e48782-kube-api-access-gvk7n\") pod \"glance-operator-controller-manager-84958c4d49-l56v9\" (UID: 
\"827de292-bc8c-40da-be5f-443d06e48782\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856676 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lrz\" (UniqueName: \"kubernetes.io/projected/dfe403d1-c0bb-4570-8b27-714c65d930af-kube-api-access-w7lrz\") pod \"horizon-operator-controller-manager-9f4696d94-sr7pb\" (UID: \"dfe403d1-c0bb-4570-8b27-714c65d930af\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.856789 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j99\" (UniqueName: \"kubernetes.io/projected/7306cbd5-07f3-48a7-a865-752417bf2e8e-kube-api-access-l7j99\") pod \"designate-operator-controller-manager-84f4f7b77b-rnbs8\" (UID: \"7306cbd5-07f3-48a7-a865-752417bf2e8e\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.874847 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.878930 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.882687 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jschf\" (UniqueName: \"kubernetes.io/projected/a51da7c1-9136-40c8-851a-f7c2d1f7a644-kube-api-access-jschf\") pod \"barbican-operator-controller-manager-6ff8b75857-n5pzn\" (UID: \"a51da7c1-9136-40c8-851a-f7c2d1f7a644\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.891598 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24s5f\" (UniqueName: \"kubernetes.io/projected/2a1bf314-ad40-4055-8373-b05888c06791-kube-api-access-24s5f\") pod \"cinder-operator-controller-manager-644bddb6d8-6hxv6\" (UID: \"2a1bf314-ad40-4055-8373-b05888c06791\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.896899 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-d7btl"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.897911 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.899833 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-knmhs" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.908017 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.909107 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-d7btl"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.909265 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.911431 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tlp6x" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.924984 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.930800 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.931815 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.935512 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jz726" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.936673 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.937656 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.940602 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.940921 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jltxd" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.946024 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.953789 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.958179 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvk7n\" (UniqueName: \"kubernetes.io/projected/827de292-bc8c-40da-be5f-443d06e48782-kube-api-access-gvk7n\") pod \"glance-operator-controller-manager-84958c4d49-l56v9\" (UID: \"827de292-bc8c-40da-be5f-443d06e48782\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.966890 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.976090 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lrz\" (UniqueName: \"kubernetes.io/projected/dfe403d1-c0bb-4570-8b27-714c65d930af-kube-api-access-w7lrz\") pod \"horizon-operator-controller-manager-9f4696d94-sr7pb\" (UID: \"dfe403d1-c0bb-4570-8b27-714c65d930af\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j99\" (UniqueName: \"kubernetes.io/projected/7306cbd5-07f3-48a7-a865-752417bf2e8e-kube-api-access-l7j99\") pod \"designate-operator-controller-manager-84f4f7b77b-rnbs8\" (UID: \"7306cbd5-07f3-48a7-a865-752417bf2e8e\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rbw\" (UniqueName: \"kubernetes.io/projected/023a7a0e-9279-4b9b-ba5d-6cd41b2aa729-kube-api-access-p8rbw\") pod \"manila-operator-controller-manager-6d68dbc695-zcsfx\" (UID: \"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.984346 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx"] Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrw5\" (UniqueName: \"kubernetes.io/projected/57843ab0-f141-436e-847c-71f339bb736b-kube-api-access-pnrw5\") pod \"ironic-operator-controller-manager-5cd4858477-hfz8s\" (UID: \"57843ab0-f141-436e-847c-71f339bb736b\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np7k\" (UniqueName: \"kubernetes.io/projected/9d557980-a1fc-4123-9a45-351264ad1fbc-kube-api-access-9np7k\") pod \"keystone-operator-controller-manager-5bd55b4bff-bbfd9\" (UID: \"9d557980-a1fc-4123-9a45-351264ad1fbc\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwshs\" (UniqueName: \"kubernetes.io/projected/e66fa8da-eabe-4fe6-8689-961c09641552-kube-api-access-bwshs\") pod \"heat-operator-controller-manager-5d889d78cf-cv4rz\" (UID: \"e66fa8da-eabe-4fe6-8689-961c09641552\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993413 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwqc\" (UniqueName: \"kubernetes.io/projected/2d4f9b95-e805-4def-bd1c-35b262ebd01f-kube-api-access-slwqc\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " 
pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:37 crc kubenswrapper[4725]: I1002 11:43:37.993454 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:37.999805 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.002582 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.005157 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.020837 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.049309 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.051949 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7w8cv" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.052158 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.052231 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8m4mx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.052542 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4l5kz" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.056144 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwshs\" (UniqueName: \"kubernetes.io/projected/e66fa8da-eabe-4fe6-8689-961c09641552-kube-api-access-bwshs\") pod \"heat-operator-controller-manager-5d889d78cf-cv4rz\" (UID: \"e66fa8da-eabe-4fe6-8689-961c09641552\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.056439 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j99\" (UniqueName: \"kubernetes.io/projected/7306cbd5-07f3-48a7-a865-752417bf2e8e-kube-api-access-l7j99\") pod \"designate-operator-controller-manager-84f4f7b77b-rnbs8\" (UID: \"7306cbd5-07f3-48a7-a865-752417bf2e8e\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.057452 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvk7n\" (UniqueName: 
\"kubernetes.io/projected/827de292-bc8c-40da-be5f-443d06e48782-kube-api-access-gvk7n\") pod \"glance-operator-controller-manager-84958c4d49-l56v9\" (UID: \"827de292-bc8c-40da-be5f-443d06e48782\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.059461 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rbw\" (UniqueName: \"kubernetes.io/projected/023a7a0e-9279-4b9b-ba5d-6cd41b2aa729-kube-api-access-p8rbw\") pod \"manila-operator-controller-manager-6d68dbc695-zcsfx\" (UID: \"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.061050 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.061225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lrz\" (UniqueName: \"kubernetes.io/projected/dfe403d1-c0bb-4570-8b27-714c65d930af-kube-api-access-w7lrz\") pod \"horizon-operator-controller-manager-9f4696d94-sr7pb\" (UID: \"dfe403d1-c0bb-4570-8b27-714c65d930af\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.065383 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.067229 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.074048 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.077859 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.079053 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.082489 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jk2sm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.091592 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098161 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhlwc\" (UniqueName: \"kubernetes.io/projected/f4918ab0-3268-4081-bdf8-05df0b51e62b-kube-api-access-mhlwc\") pod \"nova-operator-controller-manager-64cd67b5cb-nlk7p\" (UID: \"f4918ab0-3268-4081-bdf8-05df0b51e62b\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc7t\" (UniqueName: \"kubernetes.io/projected/165193eb-72d2-44c8-ad3c-12679db734a1-kube-api-access-pvc7t\") pod \"ovn-operator-controller-manager-9976ff44c-6r9zk\" (UID: \"165193eb-72d2-44c8-ad3c-12679db734a1\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxht\" (UniqueName: \"kubernetes.io/projected/81a57946-838b-45e0-8a00-a7b50950db67-kube-api-access-hxxht\") pod \"neutron-operator-controller-manager-849d5b9b84-gx29d\" (UID: \"81a57946-838b-45e0-8a00-a7b50950db67\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098280 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vp4j\" (UniqueName: \"kubernetes.io/projected/fb419c8a-047c-4df7-8120-25624030a3fe-kube-api-access-8vp4j\") pod \"placement-operator-controller-manager-589c58c6c-pvtws\" (UID: \"fb419c8a-047c-4df7-8120-25624030a3fe\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098299 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxfw\" (UniqueName: \"kubernetes.io/projected/3732c646-2b59-4238-8466-4c9240bc5b9a-kube-api-access-4wxfw\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrw5\" 
(UniqueName: \"kubernetes.io/projected/57843ab0-f141-436e-847c-71f339bb736b-kube-api-access-pnrw5\") pod \"ironic-operator-controller-manager-5cd4858477-hfz8s\" (UID: \"57843ab0-f141-436e-847c-71f339bb736b\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5pvl\" (UniqueName: \"kubernetes.io/projected/44910e65-f73b-4454-bd9d-8fbbfb18445c-kube-api-access-x5pvl\") pod \"octavia-operator-controller-manager-7b787867f4-5hpzt\" (UID: \"44910e65-f73b-4454-bd9d-8fbbfb18445c\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np7k\" (UniqueName: \"kubernetes.io/projected/9d557980-a1fc-4123-9a45-351264ad1fbc-kube-api-access-9np7k\") pod \"keystone-operator-controller-manager-5bd55b4bff-bbfd9\" (UID: \"9d557980-a1fc-4123-9a45-351264ad1fbc\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh96s\" (UniqueName: \"kubernetes.io/projected/dd3980d8-2ea7-4dd5-9604-9e09025e4220-kube-api-access-sh96s\") pod \"mariadb-operator-controller-manager-88c7-d7btl\" (UID: \"dd3980d8-2ea7-4dd5-9604-9e09025e4220\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwqc\" (UniqueName: \"kubernetes.io/projected/2d4f9b95-e805-4def-bd1c-35b262ebd01f-kube-api-access-slwqc\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.098997 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.099151 4725 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.099198 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert podName:2d4f9b95-e805-4def-bd1c-35b262ebd01f nodeName:}" failed. No retries permitted until 2025-10-02 11:43:38.599180176 +0000 UTC m=+938.506679639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert") pod "infra-operator-controller-manager-9d6c5db85-nzp5n" (UID: "2d4f9b95-e805-4def-bd1c-35b262ebd01f") : secret "infra-operator-webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.126442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwqc\" (UniqueName: \"kubernetes.io/projected/2d4f9b95-e805-4def-bd1c-35b262ebd01f-kube-api-access-slwqc\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.131892 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.133070 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.133177 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np7k\" (UniqueName: \"kubernetes.io/projected/9d557980-a1fc-4123-9a45-351264ad1fbc-kube-api-access-9np7k\") pod \"keystone-operator-controller-manager-5bd55b4bff-bbfd9\" (UID: \"9d557980-a1fc-4123-9a45-351264ad1fbc\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.138945 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ptc8m" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.139996 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.141631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrw5\" (UniqueName: \"kubernetes.io/projected/57843ab0-f141-436e-847c-71f339bb736b-kube-api-access-pnrw5\") pod \"ironic-operator-controller-manager-5cd4858477-hfz8s\" (UID: \"57843ab0-f141-436e-847c-71f339bb736b\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.152214 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.161324 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-2mchd"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.162556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.171732 4725 util.go:30] "No sandbox for pod can be found. 
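This failure is expected ordering rather than breakage: the pod spec references infra-operator-webhook-server-cert before the secret exists, so the mount is parked and retried; the same pattern repeats below for openstack-baremetal-operator-webhook-server-cert. The wait grows from the logged 500ms; a sketch of such an exponential-backoff schedule follows, where the doubling factor and the roughly two-minute cap are assumptions about the upstream goroutinemap backoff rather than values read from this log beyond the first step:

package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the wait after each consecutive failure,
// starting at initial and saturating at max.
func durationBeforeRetry(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		wait := durationBeforeRetry(500*time.Millisecond, 2*time.Minute+2*time.Second, n)
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", n, wait)
	}
}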
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.172166 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-2mchd"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.175444 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xwk4j" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5pvl\" (UniqueName: \"kubernetes.io/projected/44910e65-f73b-4454-bd9d-8fbbfb18445c-kube-api-access-x5pvl\") pod \"octavia-operator-controller-manager-7b787867f4-5hpzt\" (UID: \"44910e65-f73b-4454-bd9d-8fbbfb18445c\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8mpc\" (UniqueName: \"kubernetes.io/projected/c4d00c80-69fb-4507-9e14-2a54cdb0b8c5-kube-api-access-s8mpc\") pod \"swift-operator-controller-manager-84d6b4b759-25kj8\" (UID: \"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh96s\" (UniqueName: \"kubernetes.io/projected/dd3980d8-2ea7-4dd5-9604-9e09025e4220-kube-api-access-sh96s\") pod \"mariadb-operator-controller-manager-88c7-d7btl\" (UID: \"dd3980d8-2ea7-4dd5-9604-9e09025e4220\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfd2j\" (UniqueName: \"kubernetes.io/projected/6c738c27-b7d2-4e56-b0e5-61c19a279278-kube-api-access-rfd2j\") pod \"telemetry-operator-controller-manager-b8d54b5d7-fjs4g\" (UID: \"6c738c27-b7d2-4e56-b0e5-61c19a279278\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhlwc\" (UniqueName: \"kubernetes.io/projected/f4918ab0-3268-4081-bdf8-05df0b51e62b-kube-api-access-mhlwc\") pod \"nova-operator-controller-manager-64cd67b5cb-nlk7p\" (UID: \"f4918ab0-3268-4081-bdf8-05df0b51e62b\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200432 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc7t\" (UniqueName: \"kubernetes.io/projected/165193eb-72d2-44c8-ad3c-12679db734a1-kube-api-access-pvc7t\") pod 
\"ovn-operator-controller-manager-9976ff44c-6r9zk\" (UID: \"165193eb-72d2-44c8-ad3c-12679db734a1\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200455 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxxht\" (UniqueName: \"kubernetes.io/projected/81a57946-838b-45e0-8a00-a7b50950db67-kube-api-access-hxxht\") pod \"neutron-operator-controller-manager-849d5b9b84-gx29d\" (UID: \"81a57946-838b-45e0-8a00-a7b50950db67\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vp4j\" (UniqueName: \"kubernetes.io/projected/fb419c8a-047c-4df7-8120-25624030a3fe-kube-api-access-8vp4j\") pod \"placement-operator-controller-manager-589c58c6c-pvtws\" (UID: \"fb419c8a-047c-4df7-8120-25624030a3fe\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.200494 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxfw\" (UniqueName: \"kubernetes.io/projected/3732c646-2b59-4238-8466-4c9240bc5b9a-kube-api-access-4wxfw\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.200856 4725 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.200892 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert podName:3732c646-2b59-4238-8466-4c9240bc5b9a nodeName:}" failed. No retries permitted until 2025-10-02 11:43:38.700880382 +0000 UTC m=+938.608379835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert") pod "openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" (UID: "3732c646-2b59-4238-8466-4c9240bc5b9a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.214568 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.215759 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.219537 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sxmnx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.228811 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.238630 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc7t\" (UniqueName: \"kubernetes.io/projected/165193eb-72d2-44c8-ad3c-12679db734a1-kube-api-access-pvc7t\") pod \"ovn-operator-controller-manager-9976ff44c-6r9zk\" (UID: \"165193eb-72d2-44c8-ad3c-12679db734a1\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.239537 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxfw\" (UniqueName: \"kubernetes.io/projected/3732c646-2b59-4238-8466-4c9240bc5b9a-kube-api-access-4wxfw\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.243854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhlwc\" (UniqueName: \"kubernetes.io/projected/f4918ab0-3268-4081-bdf8-05df0b51e62b-kube-api-access-mhlwc\") pod \"nova-operator-controller-manager-64cd67b5cb-nlk7p\" (UID: \"f4918ab0-3268-4081-bdf8-05df0b51e62b\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.243898 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5pvl\" (UniqueName: \"kubernetes.io/projected/44910e65-f73b-4454-bd9d-8fbbfb18445c-kube-api-access-x5pvl\") pod \"octavia-operator-controller-manager-7b787867f4-5hpzt\" (UID: \"44910e65-f73b-4454-bd9d-8fbbfb18445c\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.245016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vp4j\" (UniqueName: \"kubernetes.io/projected/fb419c8a-047c-4df7-8120-25624030a3fe-kube-api-access-8vp4j\") pod \"placement-operator-controller-manager-589c58c6c-pvtws\" (UID: \"fb419c8a-047c-4df7-8120-25624030a3fe\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.247207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxxht\" (UniqueName: \"kubernetes.io/projected/81a57946-838b-45e0-8a00-a7b50950db67-kube-api-access-hxxht\") pod \"neutron-operator-controller-manager-849d5b9b84-gx29d\" (UID: \"81a57946-838b-45e0-8a00-a7b50950db67\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.252467 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh96s\" (UniqueName: \"kubernetes.io/projected/dd3980d8-2ea7-4dd5-9604-9e09025e4220-kube-api-access-sh96s\") pod \"mariadb-operator-controller-manager-88c7-d7btl\" 
(UID: \"dd3980d8-2ea7-4dd5-9604-9e09025e4220\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.264996 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.299169 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.301479 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2cf\" (UniqueName: \"kubernetes.io/projected/67d56c77-e0a6-4841-9c57-2afc39fcf9db-kube-api-access-2d2cf\") pod \"watcher-operator-controller-manager-6b9957f54f-fp4qx\" (UID: \"67d56c77-e0a6-4841-9c57-2afc39fcf9db\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.301544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8mpc\" (UniqueName: \"kubernetes.io/projected/c4d00c80-69fb-4507-9e14-2a54cdb0b8c5-kube-api-access-s8mpc\") pod \"swift-operator-controller-manager-84d6b4b759-25kj8\" (UID: \"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.301573 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmtd\" (UniqueName: \"kubernetes.io/projected/d3b254cf-3771-426e-9211-9cd279379d73-kube-api-access-rsmtd\") pod \"test-operator-controller-manager-85777745bb-2mchd\" (UID: \"d3b254cf-3771-426e-9211-9cd279379d73\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.301624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfd2j\" (UniqueName: \"kubernetes.io/projected/6c738c27-b7d2-4e56-b0e5-61c19a279278-kube-api-access-rfd2j\") pod \"telemetry-operator-controller-manager-b8d54b5d7-fjs4g\" (UID: \"6c738c27-b7d2-4e56-b0e5-61c19a279278\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.303325 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.332442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8mpc\" (UniqueName: \"kubernetes.io/projected/c4d00c80-69fb-4507-9e14-2a54cdb0b8c5-kube-api-access-s8mpc\") pod \"swift-operator-controller-manager-84d6b4b759-25kj8\" (UID: \"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.334060 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.334238 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.342511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfd2j\" (UniqueName: \"kubernetes.io/projected/6c738c27-b7d2-4e56-b0e5-61c19a279278-kube-api-access-rfd2j\") pod \"telemetry-operator-controller-manager-b8d54b5d7-fjs4g\" (UID: \"6c738c27-b7d2-4e56-b0e5-61c19a279278\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.399415 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.404240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2cf\" (UniqueName: \"kubernetes.io/projected/67d56c77-e0a6-4841-9c57-2afc39fcf9db-kube-api-access-2d2cf\") pod \"watcher-operator-controller-manager-6b9957f54f-fp4qx\" (UID: \"67d56c77-e0a6-4841-9c57-2afc39fcf9db\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.404318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmtd\" (UniqueName: \"kubernetes.io/projected/d3b254cf-3771-426e-9211-9cd279379d73-kube-api-access-rsmtd\") pod \"test-operator-controller-manager-85777745bb-2mchd\" (UID: \"d3b254cf-3771-426e-9211-9cd279379d73\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.420039 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.436491 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.446665 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.450942 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2cf\" (UniqueName: \"kubernetes.io/projected/67d56c77-e0a6-4841-9c57-2afc39fcf9db-kube-api-access-2d2cf\") pod \"watcher-operator-controller-manager-6b9957f54f-fp4qx\" (UID: \"67d56c77-e0a6-4841-9c57-2afc39fcf9db\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.453275 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmtd\" (UniqueName: \"kubernetes.io/projected/d3b254cf-3771-426e-9211-9cd279379d73-kube-api-access-rsmtd\") pod \"test-operator-controller-manager-85777745bb-2mchd\" (UID: \"d3b254cf-3771-426e-9211-9cd279379d73\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.456594 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-q85gs" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.456783 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.466500 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.506218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4t84\" (UniqueName: \"kubernetes.io/projected/caab214a-7c5d-4d45-bebe-680090c291d8-kube-api-access-c4t84\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.506285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.515418 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.534317 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.542153 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.577756 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.586943 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.587407 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.591158 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.594771 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.596604 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.599601 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.604975 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-sv9sw" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.607262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.607342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4t84\" (UniqueName: \"kubernetes.io/projected/caab214a-7c5d-4d45-bebe-680090c291d8-kube-api-access-c4t84\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.607381 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.607515 4725 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: E1002 11:43:38.607562 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert podName:caab214a-7c5d-4d45-bebe-680090c291d8 nodeName:}" failed. 
No retries permitted until 2025-10-02 11:43:39.107548931 +0000 UTC m=+939.015048394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert") pod "openstack-operator-controller-manager-8479857cf7-b2ttm" (UID: "caab214a-7c5d-4d45-bebe-680090c291d8") : secret "webhook-server-cert" not found Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.613028 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d4f9b95-e805-4def-bd1c-35b262ebd01f-cert\") pod \"infra-operator-controller-manager-9d6c5db85-nzp5n\" (UID: \"2d4f9b95-e805-4def-bd1c-35b262ebd01f\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.615718 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.616161 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.673078 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.678370 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4t84\" (UniqueName: \"kubernetes.io/projected/caab214a-7c5d-4d45-bebe-680090c291d8-kube-api-access-c4t84\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.695890 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.703384 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.718954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.719020 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7g7\" (UniqueName: \"kubernetes.io/projected/53be820c-d953-4996-96da-4cec8d6b3bf0-kube-api-access-dk7g7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jhggf\" (UID: \"53be820c-d953-4996-96da-4cec8d6b3bf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.737683 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.740736 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3732c646-2b59-4238-8466-4c9240bc5b9a-cert\") pod \"openstack-baremetal-operator-controller-manager-5869cb545-jwwlx\" (UID: \"3732c646-2b59-4238-8466-4c9240bc5b9a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.821415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk7g7\" (UniqueName: \"kubernetes.io/projected/53be820c-d953-4996-96da-4cec8d6b3bf0-kube-api-access-dk7g7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jhggf\" (UID: \"53be820c-d953-4996-96da-4cec8d6b3bf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.854625 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk7g7\" (UniqueName: \"kubernetes.io/projected/53be820c-d953-4996-96da-4cec8d6b3bf0-kube-api-access-dk7g7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jhggf\" (UID: \"53be820c-d953-4996-96da-4cec8d6b3bf0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.918756 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.967499 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx"] Oct 02 11:43:38 crc kubenswrapper[4725]: I1002 11:43:38.979242 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.002518 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod023a7a0e_9279_4b9b_ba5d_6cd41b2aa729.slice/crio-a5f045377f4666adf0c0e04fe9e8a584c6ecfa52aad834e877c8b77f38aa0f37 WatchSource:0}: Error finding container a5f045377f4666adf0c0e04fe9e8a584c6ecfa52aad834e877c8b77f38aa0f37: Status 404 returned error can't find the container with id a5f045377f4666adf0c0e04fe9e8a584c6ecfa52aad834e877c8b77f38aa0f37 Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.029697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.124815 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.124994 4725 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.125055 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert podName:caab214a-7c5d-4d45-bebe-680090c291d8 nodeName:}" failed. No retries permitted until 2025-10-02 11:43:40.125040026 +0000 UTC m=+940.032539489 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert") pod "openstack-operator-controller-manager-8479857cf7-b2ttm" (UID: "caab214a-7c5d-4d45-bebe-680090c291d8") : secret "webhook-server-cert" not found Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.396038 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.417006 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.437935 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.450007 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" event={"ID":"a51da7c1-9136-40c8-851a-f7c2d1f7a644","Type":"ContainerStarted","Data":"27927ff680309d8d355d4b412e7c3db13578de05cbd0e7265026ce1460f210bb"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.451601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" event={"ID":"9d557980-a1fc-4123-9a45-351264ad1fbc","Type":"ContainerStarted","Data":"b57f8b008135b57a9cc6711bfbdf8d5e5ebb521a75feb0b1e325fbeb6472c4d3"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.452656 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" event={"ID":"dfe403d1-c0bb-4570-8b27-714c65d930af","Type":"ContainerStarted","Data":"d9659944d8f452afaaf127c94afd3902d790fc8028467d90e25867d116d0dce6"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.456853 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" event={"ID":"81a57946-838b-45e0-8a00-a7b50950db67","Type":"ContainerStarted","Data":"92c2655c128bb8a972b92dad3cdf44b250d3cf736f57fc77f7b4629f87f01e2f"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.460449 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" event={"ID":"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729","Type":"ContainerStarted","Data":"a5f045377f4666adf0c0e04fe9e8a584c6ecfa52aad834e877c8b77f38aa0f37"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.461647 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" event={"ID":"2a1bf314-ad40-4055-8373-b05888c06791","Type":"ContainerStarted","Data":"837a1067c688dbfce829139b7973b0fe909fbaa80a4e4eee2ae227a84fb12e37"} Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.478642 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.483179 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.499255 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd3980d8_2ea7_4dd5_9604_9e09025e4220.slice/crio-313b70f3cbe2a5cafc4036debbd0c53527887bdba751668d2e09819a22f2859b WatchSource:0}: Error finding container 313b70f3cbe2a5cafc4036debbd0c53527887bdba751668d2e09819a22f2859b: Status 404 returned error can't find the container with id 313b70f3cbe2a5cafc4036debbd0c53527887bdba751668d2e09819a22f2859b Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.501364 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-d7btl"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.501422 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.505325 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod165193eb_72d2_44c8_ad3c_12679db734a1.slice/crio-2f9d3575f51dba2f96cda86146d5df106ca20fc74b9f81cf4e0c708d99b2f97c WatchSource:0}: Error finding container 2f9d3575f51dba2f96cda86146d5df106ca20fc74b9f81cf4e0c708d99b2f97c: Status 404 returned error can't find the container with id 2f9d3575f51dba2f96cda86146d5df106ca20fc74b9f81cf4e0c708d99b2f97c Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.508050 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.508836 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7306cbd5_07f3_48a7_a865_752417bf2e8e.slice/crio-b127093b6706332455e5af577f51d666bcb71a883f7f1694b62650914e1cd326 WatchSource:0}: Error finding container b127093b6706332455e5af577f51d666bcb71a883f7f1694b62650914e1cd326: Status 404 returned error can't find the container with id b127093b6706332455e5af577f51d666bcb71a883f7f1694b62650914e1cd326 Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.511125 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44910e65_f73b_4454_bd9d_8fbbfb18445c.slice/crio-aee19e9184aef201a72f6ff035b752e14e4e5fc47941a2173cecdabe8167db76 WatchSource:0}: Error finding container aee19e9184aef201a72f6ff035b752e14e4e5fc47941a2173cecdabe8167db76: Status 404 returned error can't find the container with id aee19e9184aef201a72f6ff035b752e14e4e5fc47941a2173cecdabe8167db76 Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.518530 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.681814 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.707042 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.709106 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb419c8a_047c_4df7_8120_25624030a3fe.slice/crio-dfc63c9b230b749251488772bd2d09c2f34d64e3d793d609a1c475c5e911d126 WatchSource:0}: Error finding container 
dfc63c9b230b749251488772bd2d09c2f34d64e3d793d609a1c475c5e911d126: Status 404 returned error can't find the container with id dfc63c9b230b749251488772bd2d09c2f34d64e3d793d609a1c475c5e911d126 Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.718542 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-589c58c6c-pvtws_openstack-operators(fb419c8a-047c-4df7-8120-25624030a3fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.720571 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n"] Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.721229 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:3f96f0843934236c261db73dacb50fc12a288890562ee4ebdc9ec22360937cd3,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slwqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-9d6c5db85-nzp5n_openstack-operators(2d4f9b95-e805-4def-bd1c-35b262ebd01f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.726300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.729914 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53be820c_d953_4996_96da_4cec8d6b3bf0.slice/crio-169f8225104a3c849cf79421fafb18041b95dbf087fe24b59f099d634161cd19 WatchSource:0}: Error finding container 169f8225104a3c849cf79421fafb18041b95dbf087fe24b59f099d634161cd19: Status 404 returned error can't find the container with id 169f8225104a3c849cf79421fafb18041b95dbf087fe24b59f099d634161cd19 Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.732015 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3b254cf_3771_426e_9211_9cd279379d73.slice/crio-3ce2d53c3a02c0669da81985cfe76fd26d8dd1abd4a1ad70b570720ec32adb9e WatchSource:0}: Error finding container 3ce2d53c3a02c0669da81985cfe76fd26d8dd1abd4a1ad70b570720ec32adb9e: Status 404 returned error can't find the container with id 3ce2d53c3a02c0669da81985cfe76fd26d8dd1abd4a1ad70b570720ec32adb9e Oct 02 
11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.733415 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c738c27_b7d2_4e56_b0e5_61c19a279278.slice/crio-84de1d358778a9411748c7bb4c1764c42ccc44a1deb754c4d0fb001afd6d4e59 WatchSource:0}: Error finding container 84de1d358778a9411748c7bb4c1764c42ccc44a1deb754c4d0fb001afd6d4e59: Status 404 returned error can't find the container with id 84de1d358778a9411748c7bb4c1764c42ccc44a1deb754c4d0fb001afd6d4e59 Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.735324 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dk7g7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-jhggf_openstack-operators(53be820c-d953-4996-96da-4cec8d6b3bf0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.736442 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" podUID="53be820c-d953-4996-96da-4cec8d6b3bf0" Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.737119 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.738835 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod827de292_bc8c_40da_be5f_443d06e48782.slice/crio-534a4b95ee15939bbaf5196b64ed29449c474d9dfd61c0a00568937eb194b6e5 WatchSource:0}: Error finding container 
534a4b95ee15939bbaf5196b64ed29449c474d9dfd61c0a00568937eb194b6e5: Status 404 returned error can't find the container with id 534a4b95ee15939bbaf5196b64ed29449c474d9dfd61c0a00568937eb194b6e5 Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.739279 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rsmtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-85777745bb-2mchd_openstack-operators(d3b254cf-3771-426e-9211-9cd279379d73): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.740632 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfd2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-b8d54b5d7-fjs4g_openstack-operators(6c738c27-b7d2-4e56-b0e5-61c19a279278): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.743622 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvk7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-84958c4d49-l56v9_openstack-operators(827de292-bc8c-40da-be5f-443d06e48782): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.746708 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-2mchd"] Oct 02 11:43:39 crc kubenswrapper[4725]: W1002 11:43:39.749620 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3732c646_2b59_4238_8466_4c9240bc5b9a.slice/crio-de60d090584d7119544ec35ee1a536e5f6e037ec791c49e1a3f90a5b7b19b099 WatchSource:0}: Error finding container de60d090584d7119544ec35ee1a536e5f6e037ec791c49e1a3f90a5b7b19b099: Status 404 returned error can't find the container with id de60d090584d7119544ec35ee1a536e5f6e037ec791c49e1a3f90a5b7b19b099 Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.756189 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g"] Oct 02 11:43:39 crc kubenswrapper[4725]: I1002 11:43:39.761140 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx"] Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.775135 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Val
ue:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.
io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/o
penstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4wxfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5869cb545-jwwlx_openstack-operators(3732c646-2b59-4238-8466-4c9240bc5b9a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.955284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" podUID="2d4f9b95-e805-4def-bd1c-35b262ebd01f" Oct 02 11:43:39 crc kubenswrapper[4725]: E1002 11:43:39.955666 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" podUID="fb419c8a-047c-4df7-8120-25624030a3fe" Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.044165 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" podUID="d3b254cf-3771-426e-9211-9cd279379d73" Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.047054 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" podUID="827de292-bc8c-40da-be5f-443d06e48782" Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.067799 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" podUID="3732c646-2b59-4238-8466-4c9240bc5b9a" Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.078320 
4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" podUID="6c738c27-b7d2-4e56-b0e5-61c19a279278" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.143166 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.169172 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/caab214a-7c5d-4d45-bebe-680090c291d8-cert\") pod \"openstack-operator-controller-manager-8479857cf7-b2ttm\" (UID: \"caab214a-7c5d-4d45-bebe-680090c291d8\") " pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.286064 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.497624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" event={"ID":"d3b254cf-3771-426e-9211-9cd279379d73","Type":"ContainerStarted","Data":"88cb07a28b8c5d480f903523149e97b0ec05818c106e974d20d8f513591fb168"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.498096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" event={"ID":"d3b254cf-3771-426e-9211-9cd279379d73","Type":"ContainerStarted","Data":"3ce2d53c3a02c0669da81985cfe76fd26d8dd1abd4a1ad70b570720ec32adb9e"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.505425 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3\\\"\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" podUID="d3b254cf-3771-426e-9211-9cd279379d73" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.516420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" event={"ID":"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5","Type":"ContainerStarted","Data":"efab06656652de908f496fe17dcb6e86acd08478a70e776a158b3359033d6b7e"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.519846 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" event={"ID":"dd3980d8-2ea7-4dd5-9604-9e09025e4220","Type":"ContainerStarted","Data":"313b70f3cbe2a5cafc4036debbd0c53527887bdba751668d2e09819a22f2859b"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.559906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" event={"ID":"827de292-bc8c-40da-be5f-443d06e48782","Type":"ContainerStarted","Data":"c6fea2e922bb30aad4c17f388e516fcad17351bac8aa9978157339a911fb4d5b"} Oct 02 
11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.559966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" event={"ID":"827de292-bc8c-40da-be5f-443d06e48782","Type":"ContainerStarted","Data":"534a4b95ee15939bbaf5196b64ed29449c474d9dfd61c0a00568937eb194b6e5"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.563321 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72\\\"\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" podUID="827de292-bc8c-40da-be5f-443d06e48782" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.563578 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" event={"ID":"44910e65-f73b-4454-bd9d-8fbbfb18445c","Type":"ContainerStarted","Data":"aee19e9184aef201a72f6ff035b752e14e4e5fc47941a2173cecdabe8167db76"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.569672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" event={"ID":"3732c646-2b59-4238-8466-4c9240bc5b9a","Type":"ContainerStarted","Data":"e4096b2cf519810881432f02534b0223b17f8ab8d0789dc2d7ac358d673cdffa"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.569746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" event={"ID":"3732c646-2b59-4238-8466-4c9240bc5b9a","Type":"ContainerStarted","Data":"de60d090584d7119544ec35ee1a536e5f6e037ec791c49e1a3f90a5b7b19b099"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.572979 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" podUID="3732c646-2b59-4238-8466-4c9240bc5b9a" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.573879 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" event={"ID":"7306cbd5-07f3-48a7-a865-752417bf2e8e","Type":"ContainerStarted","Data":"b127093b6706332455e5af577f51d666bcb71a883f7f1694b62650914e1cd326"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.577412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" event={"ID":"57843ab0-f141-436e-847c-71f339bb736b","Type":"ContainerStarted","Data":"dfaf8c763e71096fc00cf2109466f7d8f6e7015b7ff95e134688684ebe1029a8"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.583521 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" event={"ID":"67d56c77-e0a6-4841-9c57-2afc39fcf9db","Type":"ContainerStarted","Data":"fe420186861941fdc645fcb6441ec9a52e5c4b35b909294edac0904048dd1e78"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.609483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" event={"ID":"6c738c27-b7d2-4e56-b0e5-61c19a279278","Type":"ContainerStarted","Data":"c57ffec9ad2644f62ecf92b827e2b493b5f45b0750cbab2cd2ea13137885a971"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.609524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" event={"ID":"6c738c27-b7d2-4e56-b0e5-61c19a279278","Type":"ContainerStarted","Data":"84de1d358778a9411748c7bb4c1764c42ccc44a1deb754c4d0fb001afd6d4e59"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.621014 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" podUID="6c738c27-b7d2-4e56-b0e5-61c19a279278" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.639287 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" event={"ID":"fb419c8a-047c-4df7-8120-25624030a3fe","Type":"ContainerStarted","Data":"f1b28daf01a88a39961abc991f8d3a8f90133ef2cd3d279df22e569d865d21c6"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.639350 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" event={"ID":"fb419c8a-047c-4df7-8120-25624030a3fe","Type":"ContainerStarted","Data":"dfc63c9b230b749251488772bd2d09c2f34d64e3d793d609a1c475c5e911d126"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.645585 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" podUID="fb419c8a-047c-4df7-8120-25624030a3fe" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.650239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" event={"ID":"2d4f9b95-e805-4def-bd1c-35b262ebd01f","Type":"ContainerStarted","Data":"3f84d12f6b0929b7ae7b1818168dd2030852ede73a34ceaf52975194a1a2fb49"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.650288 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" event={"ID":"2d4f9b95-e805-4def-bd1c-35b262ebd01f","Type":"ContainerStarted","Data":"fbb47327abed312398cbdc56eedd25013689e1bd19b18686c62b713d9bdef38e"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.655967 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:3f96f0843934236c261db73dacb50fc12a288890562ee4ebdc9ec22360937cd3\\\"\"" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" podUID="2d4f9b95-e805-4def-bd1c-35b262ebd01f" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.658566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" event={"ID":"f4918ab0-3268-4081-bdf8-05df0b51e62b","Type":"ContainerStarted","Data":"378fddf33785931431dc9a9aebf877d10fa9496c05be1f8b076065d651e0be26"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.658620 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" event={"ID":"165193eb-72d2-44c8-ad3c-12679db734a1","Type":"ContainerStarted","Data":"2f9d3575f51dba2f96cda86146d5df106ca20fc74b9f81cf4e0c708d99b2f97c"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.660037 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" event={"ID":"e66fa8da-eabe-4fe6-8689-961c09641552","Type":"ContainerStarted","Data":"0fcb065cead37c27cbfaf1eda11f071fcb31ad502ba14056890a38aed0aa65f5"} Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.661343 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4dths" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="registry-server" containerID="cri-o://1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" gracePeriod=2 Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.661471 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" event={"ID":"53be820c-d953-4996-96da-4cec8d6b3bf0","Type":"ContainerStarted","Data":"169f8225104a3c849cf79421fafb18041b95dbf087fe24b59f099d634161cd19"} Oct 02 11:43:40 crc kubenswrapper[4725]: E1002 11:43:40.664051 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" podUID="53be820c-d953-4996-96da-4cec8d6b3bf0" Oct 02 11:43:40 crc kubenswrapper[4725]: I1002 11:43:40.824300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm"] Oct 02 11:43:41 crc kubenswrapper[4725]: I1002 11:43:41.683166 4725 generic.go:334] "Generic (PLEG): container finished" podID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerID="1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" exitCode=0 Oct 02 11:43:41 crc kubenswrapper[4725]: I1002 11:43:41.683240 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerDied","Data":"1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9"} Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.686188 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3\\\"\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" podUID="d3b254cf-3771-426e-9211-9cd279379d73" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.686262 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" podUID="53be820c-d953-4996-96da-4cec8d6b3bf0" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.689937 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" podUID="6c738c27-b7d2-4e56-b0e5-61c19a279278" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.690121 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:3f96f0843934236c261db73dacb50fc12a288890562ee4ebdc9ec22360937cd3\\\"\"" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" podUID="2d4f9b95-e805-4def-bd1c-35b262ebd01f" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.691207 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" podUID="fb419c8a-047c-4df7-8120-25624030a3fe" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.691510 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72\\\"\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" podUID="827de292-bc8c-40da-be5f-443d06e48782" Oct 02 11:43:41 crc kubenswrapper[4725]: E1002 11:43:41.692381 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" podUID="3732c646-2b59-4238-8466-4c9240bc5b9a" Oct 02 11:43:47 crc kubenswrapper[4725]: E1002 11:43:47.548619 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9 is running failed: container process not found" containerID="1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:43:47 crc kubenswrapper[4725]: E1002 11:43:47.549877 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9 is running failed: container process not found" containerID="1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" 
cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:43:47 crc kubenswrapper[4725]: E1002 11:43:47.550524 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9 is running failed: container process not found" containerID="1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" cmd=["grpc_health_probe","-addr=:50051"] Oct 02 11:43:47 crc kubenswrapper[4725]: E1002 11:43:47.550593 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4dths" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="registry-server" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.622152 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.752322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dths" event={"ID":"079de0fb-5d10-4dae-8b54-ce1169f08123","Type":"ContainerDied","Data":"d374dacfb55177ca4580ca026418a0bed766f05b04ae14c9981dd1cce98e5819"} Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.752381 4725 scope.go:117] "RemoveContainer" containerID="1241e482c5546a67a25e393c665d05e9137a02429ffc3908e162672e0cf417f9" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.752514 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dths" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.753692 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" event={"ID":"caab214a-7c5d-4d45-bebe-680090c291d8","Type":"ContainerStarted","Data":"efe110534d324acbaf5d86cd4950379ce811a9694e00e971c02a0793f7ded2c9"} Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.817570 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content\") pod \"079de0fb-5d10-4dae-8b54-ce1169f08123\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.817644 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kbf5\" (UniqueName: \"kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5\") pod \"079de0fb-5d10-4dae-8b54-ce1169f08123\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.817672 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities\") pod \"079de0fb-5d10-4dae-8b54-ce1169f08123\" (UID: \"079de0fb-5d10-4dae-8b54-ce1169f08123\") " Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.818696 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities" (OuterVolumeSpecName: "utilities") pod 
"079de0fb-5d10-4dae-8b54-ce1169f08123" (UID: "079de0fb-5d10-4dae-8b54-ce1169f08123"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.829094 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "079de0fb-5d10-4dae-8b54-ce1169f08123" (UID: "079de0fb-5d10-4dae-8b54-ce1169f08123"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.833582 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5" (OuterVolumeSpecName: "kube-api-access-4kbf5") pod "079de0fb-5d10-4dae-8b54-ce1169f08123" (UID: "079de0fb-5d10-4dae-8b54-ce1169f08123"). InnerVolumeSpecName "kube-api-access-4kbf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.920464 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.920534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kbf5\" (UniqueName: \"kubernetes.io/projected/079de0fb-5d10-4dae-8b54-ce1169f08123-kube-api-access-4kbf5\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:50 crc kubenswrapper[4725]: I1002 11:43:50.920650 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079de0fb-5d10-4dae-8b54-ce1169f08123-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:43:51 crc kubenswrapper[4725]: I1002 11:43:51.084599 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:51 crc kubenswrapper[4725]: I1002 11:43:51.090210 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dths"] Oct 02 11:43:51 crc kubenswrapper[4725]: I1002 11:43:51.275526 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" path="/var/lib/kubelet/pods/079de0fb-5d10-4dae-8b54-ce1169f08123/volumes" Oct 02 11:43:51 crc kubenswrapper[4725]: E1002 11:43:51.421840 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e1328760310f3bbf4548b8b1268cd711087dd91212b92bb0be287cad1f1b6fe9" Oct 02 11:43:51 crc kubenswrapper[4725]: E1002 11:43:51.422061 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e1328760310f3bbf4548b8b1268cd711087dd91212b92bb0be287cad1f1b6fe9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5pvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7b787867f4-5hpzt_openstack-operators(44910e65-f73b-4454-bd9d-8fbbfb18445c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:43:51 crc kubenswrapper[4725]: I1002 11:43:51.494222 4725 scope.go:117] "RemoveContainer" containerID="6b182432556f31dd10a2334ed1273df0498ed74ba332e7e728c59ec5452fedcd" Oct 02 11:43:51 crc kubenswrapper[4725]: I1002 11:43:51.524314 4725 scope.go:117] "RemoveContainer" containerID="84f392ec1f639b5456e647cc352a7d3af2e9e5eccc4aae03ac0a0bad105ffb31" Oct 02 11:43:52 crc kubenswrapper[4725]: E1002 11:43:52.444341 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" podUID="44910e65-f73b-4454-bd9d-8fbbfb18445c" Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.795620 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" event={"ID":"57843ab0-f141-436e-847c-71f339bb736b","Type":"ContainerStarted","Data":"2389f87c6965195b3ac609bd92816d9feb854376ebe202ae0bf8894254c6716c"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.818929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" event={"ID":"81a57946-838b-45e0-8a00-a7b50950db67","Type":"ContainerStarted","Data":"5892b97f2913453ed29828e8f9c320637b00a22898d3ce8b805f049fd1bb1c56"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.868437 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" 
event={"ID":"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5","Type":"ContainerStarted","Data":"7bf09230982e99d03ca422380617792eb12d72979ac22ea04773a2507b4ddbe4"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.967650 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" event={"ID":"f4918ab0-3268-4081-bdf8-05df0b51e62b","Type":"ContainerStarted","Data":"9925a63fb7671b49444307c1a62c55a3c101eccceca7c7884ed0ba71b6e3d452"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.987115 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" event={"ID":"dfe403d1-c0bb-4570-8b27-714c65d930af","Type":"ContainerStarted","Data":"55ec14e714a178597c8cb22a66e41b1ada053c9dcc2665cd893406d497259b7f"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.987161 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" event={"ID":"dfe403d1-c0bb-4570-8b27-714c65d930af","Type":"ContainerStarted","Data":"bc511ad2e4f82c9faa7106f0d24280e530e3708dba0f541534a79c163eb9c628"} Oct 02 11:43:52 crc kubenswrapper[4725]: I1002 11:43:52.988816 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.003011 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" event={"ID":"44910e65-f73b-4454-bd9d-8fbbfb18445c","Type":"ContainerStarted","Data":"7c754fc0926e1e620a73ee35a2b59dab9d0e458e84e21164c467f075480ed206"} Oct 02 11:43:53 crc kubenswrapper[4725]: E1002 11:43:53.004957 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e1328760310f3bbf4548b8b1268cd711087dd91212b92bb0be287cad1f1b6fe9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" podUID="44910e65-f73b-4454-bd9d-8fbbfb18445c" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.010782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" event={"ID":"7306cbd5-07f3-48a7-a865-752417bf2e8e","Type":"ContainerStarted","Data":"5757f151005bd09a1d29557a708545e7861f94f58690bd28cdadaad3cf1c7e2f"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.015381 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" event={"ID":"9d557980-a1fc-4123-9a45-351264ad1fbc","Type":"ContainerStarted","Data":"5bc6f422ec25631b81d4e81bbc21678e428d3fc94053e5f0f1e3831f7fabfa17"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.015426 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" event={"ID":"9d557980-a1fc-4123-9a45-351264ad1fbc","Type":"ContainerStarted","Data":"f7b8f55a99e8bdb99106d6cf5601267d4650b7a857c965a8bff030617b2f996f"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.016003 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.031245 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" event={"ID":"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729","Type":"ContainerStarted","Data":"ac22ff0b9ac3397bf8e83eb7ea0fa1b35e5079419ae75ad57273525ed11159a2"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.052355 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" podStartSLOduration=3.344627613 podStartE2EDuration="16.052339239s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:38.788366561 +0000 UTC m=+938.695866024" lastFinishedPulling="2025-10-02 11:43:51.496078187 +0000 UTC m=+951.403577650" observedRunningTime="2025-10-02 11:43:53.022491136 +0000 UTC m=+952.929990609" watchObservedRunningTime="2025-10-02 11:43:53.052339239 +0000 UTC m=+952.959838692" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.053944 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" event={"ID":"a51da7c1-9136-40c8-851a-f7c2d1f7a644","Type":"ContainerStarted","Data":"fe561ed242829c626a7c3cc53422228ef91486404412892d5c6ae5e90993bcce"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.053979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" event={"ID":"a51da7c1-9136-40c8-851a-f7c2d1f7a644","Type":"ContainerStarted","Data":"2ece940c71fe01c8b6146045ce5c3bb6f2e801260462754fb9752bcea7994e70"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.054088 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.054865 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" podStartSLOduration=3.572585648 podStartE2EDuration="16.054856525s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.01763286 +0000 UTC m=+938.925132323" lastFinishedPulling="2025-10-02 11:43:51.499903737 +0000 UTC m=+951.407403200" observedRunningTime="2025-10-02 11:43:53.04743448 +0000 UTC m=+952.954933933" watchObservedRunningTime="2025-10-02 11:43:53.054856525 +0000 UTC m=+952.962355988" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.072704 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" event={"ID":"dd3980d8-2ea7-4dd5-9604-9e09025e4220","Type":"ContainerStarted","Data":"f4d1f821a0778e5a7fd15a029b1959a15084a386c9fa27f0b92e0fd612df977f"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.088254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" event={"ID":"caab214a-7c5d-4d45-bebe-680090c291d8","Type":"ContainerStarted","Data":"095aff8e23f20bb9f8e28ac926ca4ca0edf520aa7e4c69217503633368f42da2"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.088826 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.104051 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" podStartSLOduration=3.373829318 podStartE2EDuration="16.104036784s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:38.766031815 +0000 UTC m=+938.673531278" lastFinishedPulling="2025-10-02 11:43:51.496239281 +0000 UTC m=+951.403738744" observedRunningTime="2025-10-02 11:43:53.102659668 +0000 UTC m=+953.010159131" watchObservedRunningTime="2025-10-02 11:43:53.104036784 +0000 UTC m=+953.011536247" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.120489 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" event={"ID":"67d56c77-e0a6-4841-9c57-2afc39fcf9db","Type":"ContainerStarted","Data":"086782cfe762aaf6ba9975e48768250ce6e7151e8db0b1ae07b152e826cb0742"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.134404 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" podStartSLOduration=15.134390609 podStartE2EDuration="15.134390609s" podCreationTimestamp="2025-10-02 11:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:43:53.132830988 +0000 UTC m=+953.040330451" watchObservedRunningTime="2025-10-02 11:43:53.134390609 +0000 UTC m=+953.041890072" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.136163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" event={"ID":"165193eb-72d2-44c8-ad3c-12679db734a1","Type":"ContainerStarted","Data":"c8ed849f0a0aceee701789176f8b3fb8a777b65e6b2e1ba9bf81c894f37f5b07"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.136200 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" event={"ID":"165193eb-72d2-44c8-ad3c-12679db734a1","Type":"ContainerStarted","Data":"bc6500d50b0afbb6c87c5cb0c43f50d32718855943ab9b0a658579da60cd8700"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.136935 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.158797 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" event={"ID":"e66fa8da-eabe-4fe6-8689-961c09641552","Type":"ContainerStarted","Data":"3bcafa07a42b8ab31d4476b93f8f539cf38e60cd793f798a554975d3703ea1e6"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.161395 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" podStartSLOduration=4.169982577 podStartE2EDuration="16.161379667s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.507134041 +0000 UTC m=+939.414633504" lastFinishedPulling="2025-10-02 11:43:51.498531131 +0000 UTC m=+951.406030594" observedRunningTime="2025-10-02 11:43:53.159071356 +0000 UTC m=+953.066570819" watchObservedRunningTime="2025-10-02 11:43:53.161379667 +0000 UTC m=+953.068879130" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.181165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" event={"ID":"2a1bf314-ad40-4055-8373-b05888c06791","Type":"ContainerStarted","Data":"34f6a024c7b8c602070862263e6647743e7773a05bf8920bdab20a47b8ec6a5b"} Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.181511 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:53 crc kubenswrapper[4725]: I1002 11:43:53.208053 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" podStartSLOduration=3.403397193 podStartE2EDuration="16.20803731s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:38.694976323 +0000 UTC m=+938.602475786" lastFinishedPulling="2025-10-02 11:43:51.49961643 +0000 UTC m=+951.407115903" observedRunningTime="2025-10-02 11:43:53.205933275 +0000 UTC m=+953.113432738" watchObservedRunningTime="2025-10-02 11:43:53.20803731 +0000 UTC m=+953.115536763" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.203952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" event={"ID":"7306cbd5-07f3-48a7-a865-752417bf2e8e","Type":"ContainerStarted","Data":"5a339a5411049743ee706ab77791d4165b66baa713fea05585d0e90692af6792"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.204412 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.206032 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" event={"ID":"dd3980d8-2ea7-4dd5-9604-9e09025e4220","Type":"ContainerStarted","Data":"5e661aff62b90153cbbeceaf11dbd3a556620624598ce721c919baf68be649cd"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.206241 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.208367 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" event={"ID":"57843ab0-f141-436e-847c-71f339bb736b","Type":"ContainerStarted","Data":"89c91efed7104708186ca63a72a0818161a08ca5ad3e1f1ab1b463989378ee2e"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.208906 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.213211 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" event={"ID":"f4918ab0-3268-4081-bdf8-05df0b51e62b","Type":"ContainerStarted","Data":"5fe77c1e75aa34cac861eb94c6c2b9f9eaca72b6fc6a4e73e5e8473ae4d1153a"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.213269 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.215200 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" 
event={"ID":"67d56c77-e0a6-4841-9c57-2afc39fcf9db","Type":"ContainerStarted","Data":"b5aade343949ce762031ecb5b4121606743b020445340ab6a1918b53f67bbbe1"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.224028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" event={"ID":"81a57946-838b-45e0-8a00-a7b50950db67","Type":"ContainerStarted","Data":"a0b3c8a425e4b6d31f702f21da82a35b82609a004cf510699157b32725e59069"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.225869 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.228157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" event={"ID":"023a7a0e-9279-4b9b-ba5d-6cd41b2aa729","Type":"ContainerStarted","Data":"c9dd1580d909f13838e61dddc016a35b641af407ac92c5fa329e6532d684adfd"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.228682 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.231561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" event={"ID":"2a1bf314-ad40-4055-8373-b05888c06791","Type":"ContainerStarted","Data":"7e708e6874a28e5f0a85aacaeb6795326ee72336646019ccd5111257c9f9171f"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.231850 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" podStartSLOduration=5.243312759 podStartE2EDuration="17.231815124s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.512144372 +0000 UTC m=+939.419643835" lastFinishedPulling="2025-10-02 11:43:51.500646737 +0000 UTC m=+951.408146200" observedRunningTime="2025-10-02 11:43:54.225968381 +0000 UTC m=+954.133467854" watchObservedRunningTime="2025-10-02 11:43:54.231815124 +0000 UTC m=+954.139314587" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.237796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" event={"ID":"e66fa8da-eabe-4fe6-8689-961c09641552","Type":"ContainerStarted","Data":"5f51376cc209b5d3f4a65df73aea65e1e9d58a8bf4a4ecffb0936cf16bc94c62"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.237893 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.237914 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.237928 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" event={"ID":"c4d00c80-69fb-4507-9e14-2a54cdb0b8c5","Type":"ContainerStarted","Data":"3d535266eefbddfbfa47f3bff1879cd57773369cb2d6b34f86c3c43eda035bdd"} Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.245946 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" event={"ID":"caab214a-7c5d-4d45-bebe-680090c291d8","Type":"ContainerStarted","Data":"8db854da3790532f7950ed825967fb759bf60bb6d491938a5d0a97c23188f6ae"} Oct 02 11:43:54 crc kubenswrapper[4725]: E1002 11:43:54.248588 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e1328760310f3bbf4548b8b1268cd711087dd91212b92bb0be287cad1f1b6fe9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" podUID="44910e65-f73b-4454-bd9d-8fbbfb18445c" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.251632 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" podStartSLOduration=4.251842244 podStartE2EDuration="16.251598563s" podCreationTimestamp="2025-10-02 11:43:38 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.496370479 +0000 UTC m=+939.403869932" lastFinishedPulling="2025-10-02 11:43:51.496126788 +0000 UTC m=+951.403626251" observedRunningTime="2025-10-02 11:43:54.24044134 +0000 UTC m=+954.147940823" watchObservedRunningTime="2025-10-02 11:43:54.251598563 +0000 UTC m=+954.159098026" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.260799 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" podStartSLOduration=5.1943365870000004 podStartE2EDuration="17.260774444s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.435140764 +0000 UTC m=+939.342640227" lastFinishedPulling="2025-10-02 11:43:51.501578611 +0000 UTC m=+951.409078084" observedRunningTime="2025-10-02 11:43:54.255222308 +0000 UTC m=+954.162721771" watchObservedRunningTime="2025-10-02 11:43:54.260774444 +0000 UTC m=+954.168273917" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.275030 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" podStartSLOduration=5.223881091 podStartE2EDuration="17.275006727s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.449582753 +0000 UTC m=+939.357082216" lastFinishedPulling="2025-10-02 11:43:51.500708389 +0000 UTC m=+951.408207852" observedRunningTime="2025-10-02 11:43:54.269507762 +0000 UTC m=+954.177007225" watchObservedRunningTime="2025-10-02 11:43:54.275006727 +0000 UTC m=+954.182506190" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.291357 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" podStartSLOduration=5.231086011 podStartE2EDuration="17.291335805s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.441579293 +0000 UTC m=+939.349078756" lastFinishedPulling="2025-10-02 11:43:51.501829087 +0000 UTC m=+951.409328550" observedRunningTime="2025-10-02 11:43:54.287247288 +0000 UTC m=+954.194746751" watchObservedRunningTime="2025-10-02 11:43:54.291335805 +0000 UTC m=+954.198835268" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.318468 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" 
podStartSLOduration=5.315234275 podStartE2EDuration="17.318442015s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.501186885 +0000 UTC m=+939.408686348" lastFinishedPulling="2025-10-02 11:43:51.504394625 +0000 UTC m=+951.411894088" observedRunningTime="2025-10-02 11:43:54.30526608 +0000 UTC m=+954.212765573" watchObservedRunningTime="2025-10-02 11:43:54.318442015 +0000 UTC m=+954.225941478" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.324670 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" podStartSLOduration=5.318388698 podStartE2EDuration="17.324647438s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.496956504 +0000 UTC m=+939.404455967" lastFinishedPulling="2025-10-02 11:43:51.503215244 +0000 UTC m=+951.410714707" observedRunningTime="2025-10-02 11:43:54.322243465 +0000 UTC m=+954.229742938" watchObservedRunningTime="2025-10-02 11:43:54.324647438 +0000 UTC m=+954.232146901" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.356469 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" podStartSLOduration=4.865403434 podStartE2EDuration="17.356450571s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.008235854 +0000 UTC m=+938.915735317" lastFinishedPulling="2025-10-02 11:43:51.499282991 +0000 UTC m=+951.406782454" observedRunningTime="2025-10-02 11:43:54.356285977 +0000 UTC m=+954.263785450" watchObservedRunningTime="2025-10-02 11:43:54.356450571 +0000 UTC m=+954.263950034" Oct 02 11:43:54 crc kubenswrapper[4725]: I1002 11:43:54.359289 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" podStartSLOduration=5.569873889 podStartE2EDuration="17.359274875s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.709387632 +0000 UTC m=+939.616887095" lastFinishedPulling="2025-10-02 11:43:51.498788618 +0000 UTC m=+951.406288081" observedRunningTime="2025-10-02 11:43:54.341284444 +0000 UTC m=+954.248783907" watchObservedRunningTime="2025-10-02 11:43:54.359274875 +0000 UTC m=+954.266774338" Oct 02 11:43:55 crc kubenswrapper[4725]: I1002 11:43:55.259769 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" event={"ID":"53be820c-d953-4996-96da-4cec8d6b3bf0","Type":"ContainerStarted","Data":"c0280ae8fc8fd23205b4b8c202597e94609c7e0184810dcc3dff857fd0eae9bc"} Oct 02 11:43:55 crc kubenswrapper[4725]: I1002 11:43:55.261308 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:43:56 crc kubenswrapper[4725]: I1002 11:43:56.276462 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" event={"ID":"2d4f9b95-e805-4def-bd1c-35b262ebd01f","Type":"ContainerStarted","Data":"08096eff93348fa926a70a3a8cd0534060b6bec35f03b87bf9e9886795f41bd0"} Oct 02 11:43:56 crc kubenswrapper[4725]: I1002 11:43:56.276882 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:43:56 crc 
kubenswrapper[4725]: I1002 11:43:56.299018 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" podStartSLOduration=2.955165065 podStartE2EDuration="19.298996279s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.721103999 +0000 UTC m=+939.628603462" lastFinishedPulling="2025-10-02 11:43:56.064935213 +0000 UTC m=+955.972434676" observedRunningTime="2025-10-02 11:43:56.298661349 +0000 UTC m=+956.206160812" watchObservedRunningTime="2025-10-02 11:43:56.298996279 +0000 UTC m=+956.206495752" Oct 02 11:43:56 crc kubenswrapper[4725]: I1002 11:43:56.304576 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jhggf" podStartSLOduration=3.2605960019999998 podStartE2EDuration="18.304561055s" podCreationTimestamp="2025-10-02 11:43:38 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.735134167 +0000 UTC m=+939.642633630" lastFinishedPulling="2025-10-02 11:43:54.77909922 +0000 UTC m=+954.686598683" observedRunningTime="2025-10-02 11:43:55.282832163 +0000 UTC m=+955.190331656" watchObservedRunningTime="2025-10-02 11:43:56.304561055 +0000 UTC m=+956.212060508" Oct 02 11:43:57 crc kubenswrapper[4725]: I1002 11:43:57.970652 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-n5pzn" Oct 02 11:43:57 crc kubenswrapper[4725]: I1002 11:43:57.971351 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6hxv6" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.071508 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-sr7pb" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.143394 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-bbfd9" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.165547 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-zcsfx" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.267904 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-gx29d" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.312315 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-nlk7p" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.342400 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rnbs8" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.342686 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-cv4rz" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.424047 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-hfz8s" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.469245 4725 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-6r9zk" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.537592 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-25kj8" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.544655 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-88c7-d7btl" Oct 02 11:43:58 crc kubenswrapper[4725]: I1002 11:43:58.619232 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-fp4qx" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.296503 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8479857cf7-b2ttm" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.319604 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" event={"ID":"827de292-bc8c-40da-be5f-443d06e48782","Type":"ContainerStarted","Data":"080fb4419f50d3080b42d4ea3e644ee95a18a419b2697bb0f7796d7ea3d30f89"} Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.320680 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.323258 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" event={"ID":"fb419c8a-047c-4df7-8120-25624030a3fe","Type":"ContainerStarted","Data":"5249d81ad1448a6daee08504f067892f1b609185f49f13129a44b65d05691c21"} Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.323820 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.325582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" event={"ID":"3732c646-2b59-4238-8466-4c9240bc5b9a","Type":"ContainerStarted","Data":"556eb5a8a9cc18e058d43877a4faf7b19ae40eeaab46315b8eff4eba8209ceb8"} Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.326116 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.335157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" event={"ID":"6c738c27-b7d2-4e56-b0e5-61c19a279278","Type":"ContainerStarted","Data":"63469e8df7eb15ab12bf30d6bbb168b6b4017d1bbc9e23eb5e5d3f6c19dd6b04"} Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.335783 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.339666 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" 
event={"ID":"d3b254cf-3771-426e-9211-9cd279379d73","Type":"ContainerStarted","Data":"3caea2a10a8e2d4b432b50fc91ca4a46c6f2f8567978d3dd8fb31cd458ee8c9d"} Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.340114 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.405337 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" podStartSLOduration=3.871138273 podStartE2EDuration="23.4053155s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.774641863 +0000 UTC m=+939.682141326" lastFinishedPulling="2025-10-02 11:43:59.30881909 +0000 UTC m=+959.216318553" observedRunningTime="2025-10-02 11:44:00.388821849 +0000 UTC m=+960.296321312" watchObservedRunningTime="2025-10-02 11:44:00.4053155 +0000 UTC m=+960.312814973" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.428635 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" podStartSLOduration=3.840231394 podStartE2EDuration="23.428616681s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.71842651 +0000 UTC m=+939.625925973" lastFinishedPulling="2025-10-02 11:43:59.306811807 +0000 UTC m=+959.214311260" observedRunningTime="2025-10-02 11:44:00.409320905 +0000 UTC m=+960.316820368" watchObservedRunningTime="2025-10-02 11:44:00.428616681 +0000 UTC m=+960.336116144" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.429146 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" podStartSLOduration=3.835666094 podStartE2EDuration="23.429140715s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.743528297 +0000 UTC m=+939.651027760" lastFinishedPulling="2025-10-02 11:43:59.337002918 +0000 UTC m=+959.244502381" observedRunningTime="2025-10-02 11:44:00.426200578 +0000 UTC m=+960.333700051" watchObservedRunningTime="2025-10-02 11:44:00.429140715 +0000 UTC m=+960.336640178" Oct 02 11:44:00 crc kubenswrapper[4725]: I1002 11:44:00.441293 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" podStartSLOduration=3.9849988979999997 podStartE2EDuration="23.441276453s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.740363814 +0000 UTC m=+939.647863277" lastFinishedPulling="2025-10-02 11:43:59.196641369 +0000 UTC m=+959.104140832" observedRunningTime="2025-10-02 11:44:00.438558812 +0000 UTC m=+960.346058275" watchObservedRunningTime="2025-10-02 11:44:00.441276453 +0000 UTC m=+960.348775926" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.303863 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-l56v9" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.326016 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" podStartSLOduration=10.740059047 podStartE2EDuration="30.325997832s" podCreationTimestamp="2025-10-02 11:43:38 +0000 UTC" 
firstStartedPulling="2025-10-02 11:43:39.739036459 +0000 UTC m=+939.646535922" lastFinishedPulling="2025-10-02 11:43:59.324975244 +0000 UTC m=+959.232474707" observedRunningTime="2025-10-02 11:44:00.455076855 +0000 UTC m=+960.362576308" watchObservedRunningTime="2025-10-02 11:44:08.325997832 +0000 UTC m=+968.233497305" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.407381 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" event={"ID":"44910e65-f73b-4454-bd9d-8fbbfb18445c","Type":"ContainerStarted","Data":"0fc9f2e784f2a9de910aa1a15417166487fee9e10d8cd84f956bc7909c36b40f"} Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.407878 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.429800 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" podStartSLOduration=2.87943017 podStartE2EDuration="31.429776813s" podCreationTimestamp="2025-10-02 11:43:37 +0000 UTC" firstStartedPulling="2025-10-02 11:43:39.513667642 +0000 UTC m=+939.421167105" lastFinishedPulling="2025-10-02 11:44:08.064014285 +0000 UTC m=+967.971513748" observedRunningTime="2025-10-02 11:44:08.421478535 +0000 UTC m=+968.328978028" watchObservedRunningTime="2025-10-02 11:44:08.429776813 +0000 UTC m=+968.337276286" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.519413 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-pvtws" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.590368 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-fjs4g" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.602247 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-85777745bb-2mchd" Oct 02 11:44:08 crc kubenswrapper[4725]: I1002 11:44:08.701267 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-nzp5n" Oct 02 11:44:09 crc kubenswrapper[4725]: I1002 11:44:09.038884 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5869cb545-jwwlx" Oct 02 11:44:18 crc kubenswrapper[4725]: I1002 11:44:18.404429 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-5hpzt" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.659760 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"] Oct 02 11:44:34 crc kubenswrapper[4725]: E1002 11:44:34.660472 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="extract-content" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.660484 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="extract-content" Oct 02 11:44:34 crc kubenswrapper[4725]: E1002 11:44:34.660515 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="extract-utilities" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.660521 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="extract-utilities" Oct 02 11:44:34 crc kubenswrapper[4725]: E1002 11:44:34.660538 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="registry-server" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.660545 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="registry-server" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.660666 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="079de0fb-5d10-4dae-8b54-ce1169f08123" containerName="registry-server" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.661432 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.672818 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.673300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"] Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.673454 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-v5skt" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.675851 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.676004 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.725231 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"] Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.726766 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.729244 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.737043 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"] Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.778737 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.778830 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqm7l\" (UniqueName: \"kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.880199 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.880569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8tv\" (UniqueName: \"kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.880910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.880956 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.880995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm7l\" (UniqueName: \"kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.881867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 
11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.903560 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm7l\" (UniqueName: \"kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l\") pod \"dnsmasq-dns-675f4bcbfc-lgxj6\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.980090 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.981660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm8tv\" (UniqueName: \"kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.981749 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.981793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.982484 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:34 crc kubenswrapper[4725]: I1002 11:44:34.983211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.007249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm8tv\" (UniqueName: \"kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv\") pod \"dnsmasq-dns-78dd6ddcc-xgb58\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
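The two dnsmasq pods above walk through kubelet's volume reconciliation in its usual order: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A throwaway sketch for pulling that timeline out of an excerpt like this one; the regexes are ad hoc for this dump (where inner quotes appear as \"), they assume one record per line as journalctl emits them, and the phase labels are mine, not kubelet terminology:

    import re
    import sys
    from collections import defaultdict

    PHASES = {
        "attach-verified": re.compile(r'I(\d{4} [\d:.]+).*VerifyControllerAttachedVolume started for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"'),
        "mount-started":   re.compile(r'I(\d{4} [\d:.]+).*operationExecutor\.MountVolume started for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"'),
        "setup-succeeded": re.compile(r'I(\d{4} [\d:.]+).*MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"'),
    }

    def timeline(lines):
        events = defaultdict(list)  # (pod, volume) -> [(timestamp, phase)]
        for line in lines:
            for phase, pat in PHASES.items():
                m = pat.search(line)
                if m:
                    ts, volume, pod = m.groups()
                    events[(pod, volume)].append((ts, phase))
        return events

    if __name__ == "__main__":
        for (pod, volume), evs in sorted(timeline(sys.stdin).items()):
            print(pod, volume, " -> ".join(f"{p}@{t}" for t, p in sorted(evs)))

Fed the records above, it would report, for example, that config for dnsmasq-dns-78dd6ddcc-xgb58 went attach-verified at 11:44:34.880956, mount-started at 11:44:34.981749, and setup-succeeded at 11:44:34.983211.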
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.046213 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58"
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.428401 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"]
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.527740 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"]
Oct 02 11:44:35 crc kubenswrapper[4725]: W1002 11:44:35.533816 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a4fd0a_82eb_4655_815c_021caf51352c.slice/crio-cc7445cc3cc023e7e0ca96773ee264e92bf222e021862e1d935d7cba7ce1ac35 WatchSource:0}: Error finding container cc7445cc3cc023e7e0ca96773ee264e92bf222e021862e1d935d7cba7ce1ac35: Status 404 returned error can't find the container with id cc7445cc3cc023e7e0ca96773ee264e92bf222e021862e1d935d7cba7ce1ac35
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.623649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" event={"ID":"e3a4fd0a-82eb-4655-815c-021caf51352c","Type":"ContainerStarted","Data":"cc7445cc3cc023e7e0ca96773ee264e92bf222e021862e1d935d7cba7ce1ac35"}
Oct 02 11:44:35 crc kubenswrapper[4725]: I1002 11:44:35.624594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" event={"ID":"6fb7b016-f57f-4965-9208-0af3710f830a","Type":"ContainerStarted","Data":"fa13be361377906e60c048243e346aa18e38ba9d2458cd57b5293ae2f9936ded"}
Oct 02 11:44:37 crc kubenswrapper[4725]: I1002 11:44:37.881125 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"]
Oct 02 11:44:37 crc kubenswrapper[4725]: I1002 11:44:37.912652 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"]
Oct 02 11:44:37 crc kubenswrapper[4725]: I1002 11:44:37.913934 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:37 crc kubenswrapper[4725]: I1002 11:44:37.921863 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"] Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.033470 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.033847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlppg\" (UniqueName: \"kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.033916 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.135242 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.135300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlppg\" (UniqueName: \"kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.135344 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.136150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.136657 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.158700 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlppg\" (UniqueName: 
\"kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg\") pod \"dnsmasq-dns-5ccc8479f9-qpmjt\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.178666 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"] Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.199404 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.200855 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.215643 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.239481 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.339161 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.339210 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.339312 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zft94\" (UniqueName: \"kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.440993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.442057 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.442079 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.442180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zft94\" (UniqueName: \"kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.442219 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.464538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zft94\" (UniqueName: \"kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94\") pod \"dnsmasq-dns-57d769cc4f-gtw6b\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.519249 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:38 crc kubenswrapper[4725]: I1002 11:44:38.695451 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"] Oct 02 11:44:38 crc kubenswrapper[4725]: W1002 11:44:38.701956 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b0cbc93_14ec_4a51_900e_a747f9e09bc8.slice/crio-c09920ec3a83ce34073263a41e97ad3a9df8900f0c53c80ab88541395bb7015e WatchSource:0}: Error finding container c09920ec3a83ce34073263a41e97ad3a9df8900f0c53c80ab88541395bb7015e: Status 404 returned error can't find the container with id c09920ec3a83ce34073263a41e97ad3a9df8900f0c53c80ab88541395bb7015e Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.048513 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.050252 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.052768 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.053007 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ptfjh" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.053171 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.057372 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.057608 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.057890 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.058590 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.060002 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.062933 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:44:39 crc kubenswrapper[4725]: W1002 11:44:39.079594 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942ac9b9_85b4_45a7_8048_2d97a3fdd353.slice/crio-e469e2c718fb9357c4151b9c3075de5c169498082f0d89327c383656dc72bcb2 WatchSource:0}: Error finding container e469e2c718fb9357c4151b9c3075de5c169498082f0d89327c383656dc72bcb2: Status 404 returned error can't find the container with id e469e2c718fb9357c4151b9c3075de5c169498082f0d89327c383656dc72bcb2 Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160541 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160587 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160651 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7nl\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160672 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160705 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160743 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160760 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160791 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.160823 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262066 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262248 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262307 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262336 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p7nl\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262359 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262390 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262413 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.262433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.263474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.263616 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.263773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.263789 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.264299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.264571 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.269529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.271342 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.273860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.274570 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.278929 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p7nl\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.292339 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.325268 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.326818 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.329408 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.329952 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.330088 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.330247 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.330358 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.330511 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.332123 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l4nhp" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.332300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.383896 4725 util.go:30] "No sandbox for pod can be found. 
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.383896 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464554 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464608 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464633 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5kz\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464685 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.464873 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.465033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.465070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.465106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0"
Oct 02
11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.465145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.465212 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.566897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.566942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.566958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.566981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567044 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567088 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5kz\" (UniqueName: 
\"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567142 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567178 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.567485 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.568984 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.569369 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.569413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.569701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.570919 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.574009 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.574582 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.576204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.584217 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.586623 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5kz\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.589138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.670297 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.691944 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" event={"ID":"8b0cbc93-14ec-4a51-900e-a747f9e09bc8","Type":"ContainerStarted","Data":"c09920ec3a83ce34073263a41e97ad3a9df8900f0c53c80ab88541395bb7015e"} Oct 02 11:44:39 crc kubenswrapper[4725]: I1002 11:44:39.717861 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" event={"ID":"942ac9b9-85b4-45a7-8048-2d97a3fdd353","Type":"ContainerStarted","Data":"e469e2c718fb9357c4151b9c3075de5c169498082f0d89327c383656dc72bcb2"} Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.737867 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.739422 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.741274 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.748771 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.748832 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x4gpr" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.748887 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.749140 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.752005 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.755786 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.899648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.899698 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2hk\" (UniqueName: \"kubernetes.io/projected/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kube-api-access-hw2hk\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.899721 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.899814 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.899935 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.900014 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " 
pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.900100 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.900152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:41 crc kubenswrapper[4725]: I1002 11:44:41.900208 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-secrets\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001810 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001898 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-secrets\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw2hk\" (UniqueName: \"kubernetes.io/projected/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kube-api-access-hw2hk\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.001978 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.002022 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.002042 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.002061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.003033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.004208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b64d7b7-832c-4a08-96e5-27fcd2c01988-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.004356 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.004983 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.005022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b64d7b7-832c-4a08-96e5-27fcd2c01988-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.009100 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-secrets\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.013319 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 
11:44:42.016192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64d7b7-832c-4a08-96e5-27fcd2c01988-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.024551 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw2hk\" (UniqueName: \"kubernetes.io/projected/9b64d7b7-832c-4a08-96e5-27fcd2c01988-kube-api-access-hw2hk\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.034898 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b64d7b7-832c-4a08-96e5-27fcd2c01988\") " pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.060926 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.079631 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.082626 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.091310 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.091493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tzrf8" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.091335 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.091931 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.100888 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206112 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206200 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206368 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vgj\" (UniqueName: \"kubernetes.io/projected/d9085121-a59b-4dbc-95fa-2a61f0432970-kube-api-access-x4vgj\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206479 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206529 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206580 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.206612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.307999 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308435 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vgj\" (UniqueName: \"kubernetes.io/projected/d9085121-a59b-4dbc-95fa-2a61f0432970-kube-api-access-x4vgj\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308571 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308761 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308776 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308861 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.308970 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: 
I1002 11:44:42.309764 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.310503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9085121-a59b-4dbc-95fa-2a61f0432970-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.316240 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.317227 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.318434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9085121-a59b-4dbc-95fa-2a61f0432970-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.324382 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vgj\" (UniqueName: \"kubernetes.io/projected/d9085121-a59b-4dbc-95fa-2a61f0432970-kube-api-access-x4vgj\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.330229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9085121-a59b-4dbc-95fa-2a61f0432970\") " pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.427932 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.718212 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.725455 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.730639 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.731044 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dt2l5" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.731209 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.744632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.821487 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwb5l\" (UniqueName: \"kubernetes.io/projected/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kube-api-access-kwb5l\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.821553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-config-data\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.821600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.821647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kolla-config\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.821675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.923362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kolla-config\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.923431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.923461 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwb5l\" (UniqueName: 
\"kubernetes.io/projected/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kube-api-access-kwb5l\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.923486 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-config-data\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.923531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.925554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kolla-config\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.925998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-config-data\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.929145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.929600 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:42 crc kubenswrapper[4725]: I1002 11:44:42.943346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwb5l\" (UniqueName: \"kubernetes.io/projected/3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4-kube-api-access-kwb5l\") pod \"memcached-0\" (UID: \"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4\") " pod="openstack/memcached-0" Oct 02 11:44:43 crc kubenswrapper[4725]: I1002 11:44:43.082209 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.310297 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.311483 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.314459 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sshgk" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.322572 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.456638 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpjb\" (UniqueName: \"kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb\") pod \"kube-state-metrics-0\" (UID: \"eb59bbc2-e952-462f-a94a-30eeae1b81cd\") " pod="openstack/kube-state-metrics-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.557904 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpjb\" (UniqueName: \"kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb\") pod \"kube-state-metrics-0\" (UID: \"eb59bbc2-e952-462f-a94a-30eeae1b81cd\") " pod="openstack/kube-state-metrics-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.596216 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpjb\" (UniqueName: \"kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb\") pod \"kube-state-metrics-0\" (UID: \"eb59bbc2-e952-462f-a94a-30eeae1b81cd\") " pod="openstack/kube-state-metrics-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.635248 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.978524 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:44:44 crc kubenswrapper[4725]: I1002 11:44:44.978597 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:44:46 crc kubenswrapper[4725]: I1002 11:44:46.433936 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.483391 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.486811 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.490551 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.490799 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.491051 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.492132 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2th99" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.492340 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.496628 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621267 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48cv4\" (UniqueName: \"kubernetes.io/projected/a3c790f7-722b-4693-a9cc-ba649c5833ca-kube-api-access-48cv4\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621358 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621396 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621467 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.621526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.687123 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-98gqf"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.688176 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.690696 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.690929 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-86lg7" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.692436 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b2r45"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.692607 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.694093 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.718548 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722706 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48cv4\" (UniqueName: \"kubernetes.io/projected/a3c790f7-722b-4693-a9cc-ba649c5833ca-kube-api-access-48cv4\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722865 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.722884 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.725064 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.725238 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.727135 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b2r45"] Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.727867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c790f7-722b-4693-a9cc-ba649c5833ca-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.728164 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.730874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.731554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.739598 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48cv4\" (UniqueName: \"kubernetes.io/projected/a3c790f7-722b-4693-a9cc-ba649c5833ca-kube-api-access-48cv4\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.748928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c790f7-722b-4693-a9cc-ba649c5833ca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.761077 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3c790f7-722b-4693-a9cc-ba649c5833ca\") " pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.813637 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824360 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrv9q\" (UniqueName: \"kubernetes.io/projected/ba80438e-e220-487f-b365-27a8224c7ef2-kube-api-access-xrv9q\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824477 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-combined-ca-bundle\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-log\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824548 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824575 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-etc-ovs\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824597 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-scripts\") pod 
\"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824617 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-log-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-lib\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-run\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hcdh\" (UniqueName: \"kubernetes.io/projected/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-kube-api-access-6hcdh\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba80438e-e220-487f-b365-27a8224c7ef2-scripts\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.824828 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-ovn-controller-tls-certs\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrv9q\" (UniqueName: \"kubernetes.io/projected/ba80438e-e220-487f-b365-27a8224c7ef2-kube-api-access-xrv9q\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-combined-ca-bundle\") pod \"ovn-controller-98gqf\" (UID: 
\"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926550 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-log\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926584 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-etc-ovs\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-log-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-scripts\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926632 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-lib\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926648 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926664 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-run\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926715 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hcdh\" (UniqueName: \"kubernetes.io/projected/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-kube-api-access-6hcdh\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926752 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/ba80438e-e220-487f-b365-27a8224c7ef2-scripts\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.926771 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-ovn-controller-tls-certs\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.927804 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-log-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.927844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-log\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.927953 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-lib\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.927999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-etc-ovs\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.928026 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.928070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-var-run\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.928468 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba80438e-e220-487f-b365-27a8224c7ef2-var-run-ovn\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.929875 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-scripts\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.930026 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba80438e-e220-487f-b365-27a8224c7ef2-scripts\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.943497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-ovn-controller-tls-certs\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.943502 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba80438e-e220-487f-b365-27a8224c7ef2-combined-ca-bundle\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.945937 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrv9q\" (UniqueName: \"kubernetes.io/projected/ba80438e-e220-487f-b365-27a8224c7ef2-kube-api-access-xrv9q\") pod \"ovn-controller-98gqf\" (UID: \"ba80438e-e220-487f-b365-27a8224c7ef2\") " pod="openstack/ovn-controller-98gqf" Oct 02 11:44:48 crc kubenswrapper[4725]: I1002 11:44:48.947470 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hcdh\" (UniqueName: \"kubernetes.io/projected/f4f0e1eb-7dd0-4937-a789-b9edb4de3ade-kube-api-access-6hcdh\") pod \"ovn-controller-ovs-b2r45\" (UID: \"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade\") " pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:49 crc kubenswrapper[4725]: I1002 11:44:49.018291 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf" Oct 02 11:44:49 crc kubenswrapper[4725]: I1002 11:44:49.018505 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:44:50 crc kubenswrapper[4725]: I1002 11:44:50.930336 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.335510 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:44:51 crc kubenswrapper[4725]: W1002 11:44:51.779946 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e13124_0d0a_48a8_a1a7_0127e60454e1.slice/crio-7c70897cd583ed34a231b1d7f9f502761547dc042199e74479edfb67b6a3d68e WatchSource:0}: Error finding container 7c70897cd583ed34a231b1d7f9f502761547dc042199e74479edfb67b6a3d68e: Status 404 returned error can't find the container with id 7c70897cd583ed34a231b1d7f9f502761547dc042199e74479edfb67b6a3d68e Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.808939 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.809373 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqm7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lgxj6_openstack(6fb7b016-f57f-4965-9208-0af3710f830a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.810894 4725 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" podUID="6fb7b016-f57f-4965-9208-0af3710f830a" Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.839395 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.839938 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm8tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xgb58_openstack(e3a4fd0a-82eb-4655-815c-021caf51352c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:44:51 crc kubenswrapper[4725]: E1002 11:44:51.841071 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" podUID="e3a4fd0a-82eb-4655-815c-021caf51352c" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.843439 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerStarted","Data":"01b76edcfb5f78b25d06b3d9eb1967828b129b4b50b6198a094b8a3141f5d1b7"} Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.849980 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerStarted","Data":"7c70897cd583ed34a231b1d7f9f502761547dc042199e74479edfb67b6a3d68e"} Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.894077 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.895945 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.913284 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sh9dq" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.913877 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.914143 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.914360 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.925340 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975483 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975586 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975685 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " 
pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975810 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.975828 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-config\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:51 crc kubenswrapper[4725]: I1002 11:44:51.976115 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fgw\" (UniqueName: \"kubernetes.io/projected/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-kube-api-access-j4fgw\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077302 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077379 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077429 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-config\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4fgw\" (UniqueName: 
\"kubernetes.io/projected/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-kube-api-access-j4fgw\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.077627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.078582 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.078604 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.079534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-config\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.080538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.082051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.082241 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.083669 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.097603 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4fgw\" (UniqueName: \"kubernetes.io/projected/f8dd7ed6-4794-4bf8-8d40-8bb837848eed-kube-api-access-j4fgw\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.117029 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f8dd7ed6-4794-4bf8-8d40-8bb837848eed\") " pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.261822 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.262647 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.386204 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:52 crc kubenswrapper[4725]: W1002 11:44:52.421178 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b64d7b7_832c_4a08_96e5_27fcd2c01988.slice/crio-58f888fcdbf39eb03e6c5445d3cb7c7efcabcebb290fcb712a2691dc06a81dbb WatchSource:0}: Error finding container 58f888fcdbf39eb03e6c5445d3cb7c7efcabcebb290fcb712a2691dc06a81dbb: Status 404 returned error can't find the container with id 58f888fcdbf39eb03e6c5445d3cb7c7efcabcebb290fcb712a2691dc06a81dbb Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.422966 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.485219 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqm7l\" (UniqueName: \"kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l\") pod \"6fb7b016-f57f-4965-9208-0af3710f830a\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.485601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config\") pod \"6fb7b016-f57f-4965-9208-0af3710f830a\" (UID: \"6fb7b016-f57f-4965-9208-0af3710f830a\") " Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.498338 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config" (OuterVolumeSpecName: "config") pod "6fb7b016-f57f-4965-9208-0af3710f830a" (UID: "6fb7b016-f57f-4965-9208-0af3710f830a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.523854 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l" (OuterVolumeSpecName: "kube-api-access-dqm7l") pod "6fb7b016-f57f-4965-9208-0af3710f830a" (UID: "6fb7b016-f57f-4965-9208-0af3710f830a"). InnerVolumeSpecName "kube-api-access-dqm7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.535069 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: W1002 11:44:52.555238 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c790f7_722b_4693_a9cc_ba649c5833ca.slice/crio-08cf2283938c4758902491bb1f51ebcd52f4ef7bd2d60cf1b65906641596e49d WatchSource:0}: Error finding container 08cf2283938c4758902491bb1f51ebcd52f4ef7bd2d60cf1b65906641596e49d: Status 404 returned error can't find the container with id 08cf2283938c4758902491bb1f51ebcd52f4ef7bd2d60cf1b65906641596e49d Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.613218 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqm7l\" (UniqueName: \"kubernetes.io/projected/6fb7b016-f57f-4965-9208-0af3710f830a-kube-api-access-dqm7l\") on node \"crc\" DevicePath \"\"" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.613251 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb7b016-f57f-4965-9208-0af3710f830a-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.767530 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.773556 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.783208 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.851229 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b2r45"] Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.858298 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b64d7b7-832c-4a08-96e5-27fcd2c01988","Type":"ContainerStarted","Data":"58f888fcdbf39eb03e6c5445d3cb7c7efcabcebb290fcb712a2691dc06a81dbb"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.863091 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a3c790f7-722b-4693-a9cc-ba649c5833ca","Type":"ContainerStarted","Data":"08cf2283938c4758902491bb1f51ebcd52f4ef7bd2d60cf1b65906641596e49d"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.865026 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerID="f49822396ef5aa6f64318ce016aa77635c8df45bfb4a78f85e00e967c0844dcb" exitCode=0 Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.865113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" event={"ID":"8b0cbc93-14ec-4a51-900e-a747f9e09bc8","Type":"ContainerDied","Data":"f49822396ef5aa6f64318ce016aa77635c8df45bfb4a78f85e00e967c0844dcb"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.871035 4725 generic.go:334] "Generic (PLEG): container finished" podID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerID="52fb34d835875159f4396400c295a5dfd4160189aa2d6c6abb6b05ba55304976" exitCode=0 Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.871075 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" 
event={"ID":"942ac9b9-85b4-45a7-8048-2d97a3fdd353","Type":"ContainerDied","Data":"52fb34d835875159f4396400c295a5dfd4160189aa2d6c6abb6b05ba55304976"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.874833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4","Type":"ContainerStarted","Data":"6ed11f376e59e601570f3209fbc609f569eff835b4200eeb087b40e9d6ab0dbd"} Oct 02 11:44:52 crc kubenswrapper[4725]: W1002 11:44:52.879508 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4f0e1eb_7dd0_4937_a789_b9edb4de3ade.slice/crio-f81c44c9e788b3d603ca4ea33502bc8db12072c29061e6a96aaeb4032c6d7fee WatchSource:0}: Error finding container f81c44c9e788b3d603ca4ea33502bc8db12072c29061e6a96aaeb4032c6d7fee: Status 404 returned error can't find the container with id f81c44c9e788b3d603ca4ea33502bc8db12072c29061e6a96aaeb4032c6d7fee Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.880667 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb59bbc2-e952-462f-a94a-30eeae1b81cd","Type":"ContainerStarted","Data":"24945d57acc4d732bca8fe15f3739ddc307cc6f1df87a6e2123c966bede890dd"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.886418 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf" event={"ID":"ba80438e-e220-487f-b365-27a8224c7ef2","Type":"ContainerStarted","Data":"d19229f9beef55efe93555d9b3573fc4610d3cc2d912d8d63d12fa34c185c31e"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.905382 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.905866 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lgxj6" event={"ID":"6fb7b016-f57f-4965-9208-0af3710f830a","Type":"ContainerDied","Data":"fa13be361377906e60c048243e346aa18e38ba9d2458cd57b5293ae2f9936ded"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.909023 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9085121-a59b-4dbc-95fa-2a61f0432970","Type":"ContainerStarted","Data":"27b518e23029990585f3792080d50b846002b5734f3cfbf64c0668ae78d6a054"} Oct 02 11:44:52 crc kubenswrapper[4725]: I1002 11:44:52.962408 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 02 11:44:52 crc kubenswrapper[4725]: W1002 11:44:52.970275 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dd7ed6_4794_4bf8_8d40_8bb837848eed.slice/crio-a6900780a1efba0defccb48541d4fc3d9ae32ee5484c5bfa6f9b4961eb0e12d6 WatchSource:0}: Error finding container a6900780a1efba0defccb48541d4fc3d9ae32ee5484c5bfa6f9b4961eb0e12d6: Status 404 returned error can't find the container with id a6900780a1efba0defccb48541d4fc3d9ae32ee5484c5bfa6f9b4961eb0e12d6 Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.088370 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"] Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.093696 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lgxj6"] Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.308268 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6fb7b016-f57f-4965-9208-0af3710f830a" path="/var/lib/kubelet/pods/6fb7b016-f57f-4965-9208-0af3710f830a/volumes" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.408670 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.541421 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm8tv\" (UniqueName: \"kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv\") pod \"e3a4fd0a-82eb-4655-815c-021caf51352c\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.541603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config\") pod \"e3a4fd0a-82eb-4655-815c-021caf51352c\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.541685 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc\") pod \"e3a4fd0a-82eb-4655-815c-021caf51352c\" (UID: \"e3a4fd0a-82eb-4655-815c-021caf51352c\") " Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.542313 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config" (OuterVolumeSpecName: "config") pod "e3a4fd0a-82eb-4655-815c-021caf51352c" (UID: "e3a4fd0a-82eb-4655-815c-021caf51352c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.542327 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3a4fd0a-82eb-4655-815c-021caf51352c" (UID: "e3a4fd0a-82eb-4655-815c-021caf51352c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.547583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv" (OuterVolumeSpecName: "kube-api-access-qm8tv") pod "e3a4fd0a-82eb-4655-815c-021caf51352c" (UID: "e3a4fd0a-82eb-4655-815c-021caf51352c"). InnerVolumeSpecName "kube-api-access-qm8tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.643685 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.643758 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm8tv\" (UniqueName: \"kubernetes.io/projected/e3a4fd0a-82eb-4655-815c-021caf51352c-kube-api-access-qm8tv\") on node \"crc\" DevicePath \"\"" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.643770 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a4fd0a-82eb-4655-815c-021caf51352c-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.921677 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" event={"ID":"8b0cbc93-14ec-4a51-900e-a747f9e09bc8","Type":"ContainerStarted","Data":"6acb7116c5b6819ca12a02b2fd304a4b9b52a92b205539f0545a60a485d011b9"} Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.921762 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.929009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f8dd7ed6-4794-4bf8-8d40-8bb837848eed","Type":"ContainerStarted","Data":"a6900780a1efba0defccb48541d4fc3d9ae32ee5484c5bfa6f9b4961eb0e12d6"} Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.930455 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b2r45" event={"ID":"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade","Type":"ContainerStarted","Data":"f81c44c9e788b3d603ca4ea33502bc8db12072c29061e6a96aaeb4032c6d7fee"} Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.938705 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" event={"ID":"942ac9b9-85b4-45a7-8048-2d97a3fdd353","Type":"ContainerStarted","Data":"612576d0270ba76b60e7fd11eb469a5f476f5ca2f541acf7f6cd882673577519"} Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.939836 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.942429 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" event={"ID":"e3a4fd0a-82eb-4655-815c-021caf51352c","Type":"ContainerDied","Data":"cc7445cc3cc023e7e0ca96773ee264e92bf222e021862e1d935d7cba7ce1ac35"} Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.942496 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xgb58" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.951260 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" podStartSLOduration=3.667022738 podStartE2EDuration="16.95123862s" podCreationTimestamp="2025-10-02 11:44:37 +0000 UTC" firstStartedPulling="2025-10-02 11:44:38.705069846 +0000 UTC m=+998.612569309" lastFinishedPulling="2025-10-02 11:44:51.989285718 +0000 UTC m=+1011.896785191" observedRunningTime="2025-10-02 11:44:53.938706662 +0000 UTC m=+1013.846206125" watchObservedRunningTime="2025-10-02 11:44:53.95123862 +0000 UTC m=+1013.858738073" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.957805 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" podStartSLOduration=3.049107776 podStartE2EDuration="15.957788291s" podCreationTimestamp="2025-10-02 11:44:38 +0000 UTC" firstStartedPulling="2025-10-02 11:44:39.087975837 +0000 UTC m=+998.995475300" lastFinishedPulling="2025-10-02 11:44:51.996656352 +0000 UTC m=+1011.904155815" observedRunningTime="2025-10-02 11:44:53.953844658 +0000 UTC m=+1013.861344131" watchObservedRunningTime="2025-10-02 11:44:53.957788291 +0000 UTC m=+1013.865287754" Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.993012 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"] Oct 02 11:44:53 crc kubenswrapper[4725]: I1002 11:44:53.998436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xgb58"] Oct 02 11:44:55 crc kubenswrapper[4725]: I1002 11:44:55.278529 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a4fd0a-82eb-4655-815c-021caf51352c" path="/var/lib/kubelet/pods/e3a4fd0a-82eb-4655-815c-021caf51352c/volumes" Oct 02 11:44:58 crc kubenswrapper[4725]: I1002 11:44:58.241025 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:44:58 crc kubenswrapper[4725]: I1002 11:44:58.520742 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:44:58 crc kubenswrapper[4725]: I1002 11:44:58.572305 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"] Oct 02 11:44:58 crc kubenswrapper[4725]: I1002 11:44:58.976340 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="dnsmasq-dns" containerID="cri-o://6acb7116c5b6819ca12a02b2fd304a4b9b52a92b205539f0545a60a485d011b9" gracePeriod=10 Oct 02 11:44:59 crc kubenswrapper[4725]: I1002 11:44:59.984085 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerID="6acb7116c5b6819ca12a02b2fd304a4b9b52a92b205539f0545a60a485d011b9" exitCode=0 Oct 02 11:44:59 crc kubenswrapper[4725]: I1002 11:44:59.984156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" event={"ID":"8b0cbc93-14ec-4a51-900e-a747f9e09bc8","Type":"ContainerDied","Data":"6acb7116c5b6819ca12a02b2fd304a4b9b52a92b205539f0545a60a485d011b9"} Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.085784 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.154944 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp"] Oct 02 11:45:00 crc kubenswrapper[4725]: E1002 11:45:00.155257 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="dnsmasq-dns" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.155274 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="dnsmasq-dns" Oct 02 11:45:00 crc kubenswrapper[4725]: E1002 11:45:00.155311 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="init" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.155318 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="init" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.155485 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" containerName="dnsmasq-dns" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.156026 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.158677 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.158697 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.161029 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp"] Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.173417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlppg\" (UniqueName: \"kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg\") pod \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.173609 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc\") pod \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.173653 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config\") pod \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\" (UID: \"8b0cbc93-14ec-4a51-900e-a747f9e09bc8\") " Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.177220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg" (OuterVolumeSpecName: "kube-api-access-zlppg") pod "8b0cbc93-14ec-4a51-900e-a747f9e09bc8" (UID: "8b0cbc93-14ec-4a51-900e-a747f9e09bc8"). InnerVolumeSpecName "kube-api-access-zlppg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.220640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config" (OuterVolumeSpecName: "config") pod "8b0cbc93-14ec-4a51-900e-a747f9e09bc8" (UID: "8b0cbc93-14ec-4a51-900e-a747f9e09bc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.235923 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b0cbc93-14ec-4a51-900e-a747f9e09bc8" (UID: "8b0cbc93-14ec-4a51-900e-a747f9e09bc8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275015 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqds\" (UniqueName: \"kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275151 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275298 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275314 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlppg\" (UniqueName: \"kubernetes.io/projected/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-kube-api-access-zlppg\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.275330 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b0cbc93-14ec-4a51-900e-a747f9e09bc8-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.376598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqds\" (UniqueName: \"kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.376713 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.376759 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.378089 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.379971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.392545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqds\" (UniqueName: \"kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds\") pod \"collect-profiles-29323425-4dnjp\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.478135 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.996857 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" event={"ID":"8b0cbc93-14ec-4a51-900e-a747f9e09bc8","Type":"ContainerDied","Data":"c09920ec3a83ce34073263a41e97ad3a9df8900f0c53c80ab88541395bb7015e"} Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.996904 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qpmjt" Oct 02 11:45:00 crc kubenswrapper[4725]: I1002 11:45:00.996917 4725 scope.go:117] "RemoveContainer" containerID="6acb7116c5b6819ca12a02b2fd304a4b9b52a92b205539f0545a60a485d011b9" Oct 02 11:45:01 crc kubenswrapper[4725]: I1002 11:45:01.029032 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"] Oct 02 11:45:01 crc kubenswrapper[4725]: I1002 11:45:01.033657 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qpmjt"] Oct 02 11:45:01 crc kubenswrapper[4725]: I1002 11:45:01.227560 4725 scope.go:117] "RemoveContainer" containerID="f49822396ef5aa6f64318ce016aa77635c8df45bfb4a78f85e00e967c0844dcb" Oct 02 11:45:01 crc kubenswrapper[4725]: I1002 11:45:01.312367 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0cbc93-14ec-4a51-900e-a747f9e09bc8" path="/var/lib/kubelet/pods/8b0cbc93-14ec-4a51-900e-a747f9e09bc8/volumes" Oct 02 11:45:01 crc kubenswrapper[4725]: I1002 11:45:01.330168 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp"] Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.009931 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" event={"ID":"710c6864-7bd0-41ca-b599-3234b6cea3b4","Type":"ContainerStarted","Data":"899e29536f3a832d87145b0fc437b57b1802736241117537c3c897c6f5186df8"} Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.014586 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9085121-a59b-4dbc-95fa-2a61f0432970","Type":"ContainerStarted","Data":"d7d17f13bf09b98749ad7a4e37b065cb5f948a8c139d5ddfe6081a1b7194764c"} Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.019662 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4","Type":"ContainerStarted","Data":"68a24f99d3d36e89bf97fd258ff061421dc6989a3e811d9d370149ee51a7b6a9"} Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.020306 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.021553 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b2r45" event={"ID":"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade","Type":"ContainerStarted","Data":"875ff67aa744daa788833ced6cde87bbf094426e489ae93aaccdde264642976d"} Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.024322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b64d7b7-832c-4a08-96e5-27fcd2c01988","Type":"ContainerStarted","Data":"25046225d66dfcb463802a8a2256826dabed0379c8a00c8e81dbe509677c288a"} Oct 02 11:45:02 crc kubenswrapper[4725]: I1002 11:45:02.066294 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.794961075 podStartE2EDuration="20.066273233s" podCreationTimestamp="2025-10-02 11:44:42 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.293046624 +0000 UTC m=+1012.200546087" lastFinishedPulling="2025-10-02 11:44:59.564358782 +0000 UTC m=+1019.471858245" observedRunningTime="2025-10-02 11:45:02.057116243 +0000 UTC m=+1021.964615726" watchObservedRunningTime="2025-10-02 11:45:02.066273233 +0000 UTC m=+1021.973772716" Oct 02 
11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.034179 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4f0e1eb-7dd0-4937-a789-b9edb4de3ade" containerID="875ff67aa744daa788833ced6cde87bbf094426e489ae93aaccdde264642976d" exitCode=0 Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.034399 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b2r45" event={"ID":"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade","Type":"ContainerDied","Data":"875ff67aa744daa788833ced6cde87bbf094426e489ae93aaccdde264642976d"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.035998 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf" event={"ID":"ba80438e-e220-487f-b365-27a8224c7ef2","Type":"ContainerStarted","Data":"e74575deb25e039745e5fba51d2c26551a6c165a3bbb733adeebfb89d99017fe"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.036495 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-98gqf" Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.040408 4725 generic.go:334] "Generic (PLEG): container finished" podID="710c6864-7bd0-41ca-b599-3234b6cea3b4" containerID="349354862b945d2787c003d2aab7d098babc173adeac549e737c3d109b18a099" exitCode=0 Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.040494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" event={"ID":"710c6864-7bd0-41ca-b599-3234b6cea3b4","Type":"ContainerDied","Data":"349354862b945d2787c003d2aab7d098babc173adeac549e737c3d109b18a099"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.042464 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a3c790f7-722b-4693-a9cc-ba649c5833ca","Type":"ContainerStarted","Data":"1472418226b82b587647fe25d625f70d70f3761a28abb8aca75a95dc643c1c64"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.043915 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerStarted","Data":"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.045305 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerStarted","Data":"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.046398 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb59bbc2-e952-462f-a94a-30eeae1b81cd","Type":"ContainerStarted","Data":"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.046500 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.053236 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f8dd7ed6-4794-4bf8-8d40-8bb837848eed","Type":"ContainerStarted","Data":"d192ab7bee57b8764f78292aff55780a6e91f41440cd2d3984bd7e122e336b2b"} Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.090421 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.64560626 
podStartE2EDuration="19.090406651s" podCreationTimestamp="2025-10-02 11:44:44 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.792628966 +0000 UTC m=+1012.700128439" lastFinishedPulling="2025-10-02 11:45:01.237429347 +0000 UTC m=+1021.144928830" observedRunningTime="2025-10-02 11:45:03.078896109 +0000 UTC m=+1022.986395562" watchObservedRunningTime="2025-10-02 11:45:03.090406651 +0000 UTC m=+1022.997906114" Oct 02 11:45:03 crc kubenswrapper[4725]: I1002 11:45:03.121113 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-98gqf" podStartSLOduration=7.866475006 podStartE2EDuration="15.121095046s" podCreationTimestamp="2025-10-02 11:44:48 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.780347353 +0000 UTC m=+1012.687846816" lastFinishedPulling="2025-10-02 11:45:00.034967393 +0000 UTC m=+1019.942466856" observedRunningTime="2025-10-02 11:45:03.118122448 +0000 UTC m=+1023.025621921" watchObservedRunningTime="2025-10-02 11:45:03.121095046 +0000 UTC m=+1023.028594509" Oct 02 11:45:04 crc kubenswrapper[4725]: I1002 11:45:04.062845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b2r45" event={"ID":"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade","Type":"ContainerStarted","Data":"e096a7dbdb6003ae3dd99441b532fc1e226dac5d4dd2b13719f9c96ec47ee4fa"} Oct 02 11:45:04 crc kubenswrapper[4725]: I1002 11:45:04.063322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b2r45" event={"ID":"f4f0e1eb-7dd0-4937-a789-b9edb4de3ade","Type":"ContainerStarted","Data":"20583111ea97537304a564d26b3f9af912d52b07065722cdc6b79ad28da66b73"} Oct 02 11:45:04 crc kubenswrapper[4725]: I1002 11:45:04.090314 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b2r45" podStartSLOduration=8.984000284 podStartE2EDuration="16.090297643s" podCreationTimestamp="2025-10-02 11:44:48 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.905307601 +0000 UTC m=+1012.812807064" lastFinishedPulling="2025-10-02 11:45:00.01160496 +0000 UTC m=+1019.919104423" observedRunningTime="2025-10-02 11:45:04.087073118 +0000 UTC m=+1023.994572591" watchObservedRunningTime="2025-10-02 11:45:04.090297643 +0000 UTC m=+1023.997797106" Oct 02 11:45:05 crc kubenswrapper[4725]: I1002 11:45:05.072771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:45:05 crc kubenswrapper[4725]: I1002 11:45:05.073387 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:45:07 crc kubenswrapper[4725]: I1002 11:45:07.090213 4725 generic.go:334] "Generic (PLEG): container finished" podID="d9085121-a59b-4dbc-95fa-2a61f0432970" containerID="d7d17f13bf09b98749ad7a4e37b065cb5f948a8c139d5ddfe6081a1b7194764c" exitCode=0 Oct 02 11:45:07 crc kubenswrapper[4725]: I1002 11:45:07.090293 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9085121-a59b-4dbc-95fa-2a61f0432970","Type":"ContainerDied","Data":"d7d17f13bf09b98749ad7a4e37b065cb5f948a8c139d5ddfe6081a1b7194764c"} Oct 02 11:45:07 crc kubenswrapper[4725]: I1002 11:45:07.092035 4725 generic.go:334] "Generic (PLEG): container finished" podID="9b64d7b7-832c-4a08-96e5-27fcd2c01988" containerID="25046225d66dfcb463802a8a2256826dabed0379c8a00c8e81dbe509677c288a" exitCode=0 Oct 02 11:45:07 crc kubenswrapper[4725]: I1002 11:45:07.092073 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-galera-0" event={"ID":"9b64d7b7-832c-4a08-96e5-27fcd2c01988","Type":"ContainerDied","Data":"25046225d66dfcb463802a8a2256826dabed0379c8a00c8e81dbe509677c288a"} Oct 02 11:45:08 crc kubenswrapper[4725]: I1002 11:45:08.084011 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 02 11:45:11 crc kubenswrapper[4725]: I1002 11:45:11.990360 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2mzwk"] Oct 02 11:45:11 crc kubenswrapper[4725]: I1002 11:45:11.992294 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:11.999856 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.000049 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2mzwk"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovs-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081500 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-combined-ca-bundle\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081576 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovn-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081627 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlg4\" (UniqueName: \"kubernetes.io/projected/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-kube-api-access-rmlg4\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.081671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-config\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 
11:45:12.097674 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.098884 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.100683 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.118208 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183523 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovn-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fnc\" (UniqueName: \"kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183577 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmlg4\" (UniqueName: \"kubernetes.io/projected/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-kube-api-access-rmlg4\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-config\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.183907 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovn-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184029 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184082 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovs-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-ovs-rundir\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184244 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-combined-ca-bundle\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.184645 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-config\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.189832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-combined-ca-bundle\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.191204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.199080 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmlg4\" (UniqueName: \"kubernetes.io/projected/d765fdd7-c196-4fdc-b5ae-813c10a8bd2b-kube-api-access-rmlg4\") pod \"ovn-controller-metrics-2mzwk\" (UID: \"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b\") " pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.286550 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.286607 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fnc\" (UniqueName: \"kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.286666 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.286703 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.287525 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.288096 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.288513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.312389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fnc\" (UniqueName: \"kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc\") pod \"dnsmasq-dns-7fd796d7df-8l7px\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.320888 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2mzwk" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.394593 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.395479 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.436461 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.438057 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.440256 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.453562 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.490691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.490780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.490874 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt25t\" (UniqueName: \"kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.490923 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.490962 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.592276 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.592328 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 
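Two patterns recur in the span above. First, volume setup is a three-phase sequence per volume: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded; host-path and configmap volumes (ovn-rundir, ovs-rundir, config) complete within a millisecond or two, while secrets and the projected kube-api-access-* tokens trail by a few more. Second, the dnsmasq-dns pods are churning through ReplicaSet hashes (5ccc8479f9 deleted earlier, 7fd796d7df added and deleted within the same second, 86db49b7ff next): each new ovsdbserver-nb/sb ConfigMap volume in the pod template evidently rolls the Deployment to a new ReplicaSet, and kubelet logs the resulting ADD/DELETE pairs. Below, a sketch that reduces the reconciler lines to a per-volume phase map, to spot any volume that never reaches SetUp succeeded; the regex is fitted to the messages above, not a stable kubelet format:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the three phase messages; \\?" tolerates the escaped quotes (\")
// that journald shows inside the klog message text.
var re = regexp.MustCompile(`(VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded) for volume \\?"([^"\\]+)\\?"`)

func main() {
	phase := map[string]string{} // volume name -> last phase seen
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			phase[m[2]] = m[1]
		}
	}
	for vol, p := range phase {
		if p != "MountVolume.SetUp succeeded" {
			fmt.Printf("volume %q last seen at: %s\n", vol, p)
		}
	}
}
```

One caveat the journal itself illustrates: keying by bare volume name collides across pods, so the failing projected etc-swift of swift-storage-0 further down would be masked by the same-named empty-dir volumes of the rebalance pods; keying on the UniqueName instead disambiguates them.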
11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.592405 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt25t\" (UniqueName: \"kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.592450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.592483 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.593420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.593957 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.594095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.594282 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.611567 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt25t\" (UniqueName: \"kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t\") pod \"dnsmasq-dns-86db49b7ff-fb8h2\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:12 crc kubenswrapper[4725]: I1002 11:45:12.752225 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.679468 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.706834 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.759865 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.761703 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.780103 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.826771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.826815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.826849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.826913 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gch27\" (UniqueName: \"kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.827004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.928035 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.928100 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.928138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.928179 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gch27\" (UniqueName: \"kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.928261 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.929017 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.929130 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.929260 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.929277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.944098 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gch27\" (UniqueName: \"kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27\") pod \"dnsmasq-dns-698758b865-2n9b7\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.978005 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:45:14 crc kubenswrapper[4725]: I1002 11:45:14.978058 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.109925 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.801768 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.808627 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.810876 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.811022 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.811558 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.812659 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j6gxt" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.828047 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.841505 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-cache\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.841543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.841568 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-lock\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.841631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.842069 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnbc\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-kube-api-access-mxnbc\") pod \"swift-storage-0\" 
(UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.950639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.950922 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnbc\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-kube-api-access-mxnbc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.951089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-cache\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.951113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.951138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-lock\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.951652 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-lock\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: E1002 11:45:15.952329 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 11:45:15 crc kubenswrapper[4725]: E1002 11:45:15.952358 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.952375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-cache\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: E1002 11:45:15.952423 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift podName:e1fb73ad-22b0-46f9-a5c0-9faba5acb82d nodeName:}" failed. No retries permitted until 2025-10-02 11:45:16.452402355 +0000 UTC m=+1036.359901908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift") pod "swift-storage-0" (UID: "e1fb73ad-22b0-46f9-a5c0-9faba5acb82d") : configmap "swift-ring-files" not found Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.952493 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.970684 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnbc\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-kube-api-access-mxnbc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:15 crc kubenswrapper[4725]: I1002 11:45:15.972286 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.312770 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7mvhs"] Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.314361 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.316569 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.316981 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.317147 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.333069 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7mvhs"] Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.342111 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7mvhs"] Oct 02 11:45:16 crc kubenswrapper[4725]: E1002 11:45:16.342452 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-mpbfb ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-7mvhs" podUID="fa2c5f69-1e44-4409-b781-316b14a28986" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.348934 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zsfz8"] Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.350955 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.356566 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.356758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.356849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.356915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpbfb\" (UniqueName: \"kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.356993 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.357075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.357129 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.357412 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zsfz8"] Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458431 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458492 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458516 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458535 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458556 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqk2f\" (UniqueName: \"kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458588 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458612 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpbfb\" (UniqueName: \"kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458675 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458740 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458777 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.458797 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: E1002 11:45:16.459119 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 11:45:16 crc kubenswrapper[4725]: E1002 11:45:16.459132 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 11:45:16 crc kubenswrapper[4725]: E1002 11:45:16.459165 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift podName:e1fb73ad-22b0-46f9-a5c0-9faba5acb82d nodeName:}" failed. No retries permitted until 2025-10-02 11:45:17.459152282 +0000 UTC m=+1037.366651745 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift") pod "swift-storage-0" (UID: "e1fb73ad-22b0-46f9-a5c0-9faba5acb82d") : configmap "swift-ring-files" not found Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.459827 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.460260 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.460597 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.462577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.463648 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.464203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.474671 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpbfb\" (UniqueName: \"kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb\") pod \"swift-ring-rebalance-7mvhs\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.560802 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.560876 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift\") pod \"swift-ring-rebalance-zsfz8\" (UID: 
\"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.560913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqk2f\" (UniqueName: \"kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.560989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.561043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.561085 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.561130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.562198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.562201 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.563574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.564027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 
11:45:16.564337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.564859 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.577714 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqk2f\" (UniqueName: \"kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f\") pod \"swift-ring-rebalance-zsfz8\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:16 crc kubenswrapper[4725]: I1002 11:45:16.669501 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.176525 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.187822 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272164 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272232 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272395 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272453 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.272506 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpbfb\" (UniqueName: \"kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb\") pod \"fa2c5f69-1e44-4409-b781-316b14a28986\" (UID: \"fa2c5f69-1e44-4409-b781-316b14a28986\") " Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.273131 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts" (OuterVolumeSpecName: "scripts") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.273489 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.273658 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.276263 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.280086 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.280569 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.280801 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb" (OuterVolumeSpecName: "kube-api-access-mpbfb") pod "fa2c5f69-1e44-4409-b781-316b14a28986" (UID: "fa2c5f69-1e44-4409-b781-316b14a28986"). 
InnerVolumeSpecName "kube-api-access-mpbfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374123 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374399 4725 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa2c5f69-1e44-4409-b781-316b14a28986-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374493 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpbfb\" (UniqueName: \"kubernetes.io/projected/fa2c5f69-1e44-4409-b781-316b14a28986-kube-api-access-mpbfb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374587 4725 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374678 4725 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374828 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2c5f69-1e44-4409-b781-316b14a28986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.374864 4725 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa2c5f69-1e44-4409-b781-316b14a28986-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:17 crc kubenswrapper[4725]: I1002 11:45:17.476260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:17 crc kubenswrapper[4725]: E1002 11:45:17.476743 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 11:45:17 crc kubenswrapper[4725]: E1002 11:45:17.476831 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 11:45:17 crc kubenswrapper[4725]: E1002 11:45:17.476936 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift podName:e1fb73ad-22b0-46f9-a5c0-9faba5acb82d nodeName:}" failed. No retries permitted until 2025-10-02 11:45:19.476915988 +0000 UTC m=+1039.384415441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift") pod "swift-storage-0" (UID: "e1fb73ad-22b0-46f9-a5c0-9faba5acb82d") : configmap "swift-ring-files" not found Oct 02 11:45:18 crc kubenswrapper[4725]: I1002 11:45:18.182805 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7mvhs" Oct 02 11:45:18 crc kubenswrapper[4725]: I1002 11:45:18.244633 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7mvhs"] Oct 02 11:45:18 crc kubenswrapper[4725]: I1002 11:45:18.257018 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7mvhs"] Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.014113 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.195202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" event={"ID":"710c6864-7bd0-41ca-b599-3234b6cea3b4","Type":"ContainerDied","Data":"899e29536f3a832d87145b0fc437b57b1802736241117537c3c897c6f5186df8"} Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.195241 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="899e29536f3a832d87145b0fc437b57b1802736241117537c3c897c6f5186df8" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.195253 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.204431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume\") pod \"710c6864-7bd0-41ca-b599-3234b6cea3b4\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.204590 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkqds\" (UniqueName: \"kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds\") pod \"710c6864-7bd0-41ca-b599-3234b6cea3b4\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.204611 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume\") pod \"710c6864-7bd0-41ca-b599-3234b6cea3b4\" (UID: \"710c6864-7bd0-41ca-b599-3234b6cea3b4\") " Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.205548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "710c6864-7bd0-41ca-b599-3234b6cea3b4" (UID: "710c6864-7bd0-41ca-b599-3234b6cea3b4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.207316 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.207456 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4fgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(f8dd7ed6-4794-4bf8-8d40-8bb837848eed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.208554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="f8dd7ed6-4794-4bf8-8d40-8bb837848eed" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.208886 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds" (OuterVolumeSpecName: "kube-api-access-rkqds") pod "710c6864-7bd0-41ca-b599-3234b6cea3b4" (UID: "710c6864-7bd0-41ca-b599-3234b6cea3b4"). InnerVolumeSpecName "kube-api-access-rkqds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.211205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "710c6864-7bd0-41ca-b599-3234b6cea3b4" (UID: "710c6864-7bd0-41ca-b599-3234b6cea3b4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.212796 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.213503 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48cv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(a3c790f7-722b-4693-a9cc-ba649c5833ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.215950 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="a3c790f7-722b-4693-a9cc-ba649c5833ca" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.297492 4725 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2c5f69-1e44-4409-b781-316b14a28986" path="/var/lib/kubelet/pods/fa2c5f69-1e44-4409-b781-316b14a28986/volumes" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.307088 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/710c6864-7bd0-41ca-b599-3234b6cea3b4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.307260 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkqds\" (UniqueName: \"kubernetes.io/projected/710c6864-7bd0-41ca-b599-3234b6cea3b4-kube-api-access-rkqds\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.307289 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/710c6864-7bd0-41ca-b599-3234b6cea3b4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.513269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.513686 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.513700 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 11:45:19 crc kubenswrapper[4725]: E1002 11:45:19.513754 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift podName:e1fb73ad-22b0-46f9-a5c0-9faba5acb82d nodeName:}" failed. No retries permitted until 2025-10-02 11:45:23.513738355 +0000 UTC m=+1043.421237818 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift") pod "swift-storage-0" (UID: "e1fb73ad-22b0-46f9-a5c0-9faba5acb82d") : configmap "swift-ring-files" not found Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.656085 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zsfz8"] Oct 02 11:45:19 crc kubenswrapper[4725]: W1002 11:45:19.661379 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66f1562e_003f_4f29_a7ba_2c42b823662e.slice/crio-99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec WatchSource:0}: Error finding container 99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec: Status 404 returned error can't find the container with id 99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.783049 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2mzwk"] Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.797899 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.889501 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:19 crc kubenswrapper[4725]: I1002 11:45:19.904112 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:19 crc kubenswrapper[4725]: W1002 11:45:19.909864 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cb7de96_e482_43c5_ba1f_7d2532e1d516.slice/crio-78a11460bedf070db7dbe98d0caf0d0afbe9fa0312885da871bb701f41707f9e WatchSource:0}: Error finding container 78a11460bedf070db7dbe98d0caf0d0afbe9fa0312885da871bb701f41707f9e: Status 404 returned error can't find the container with id 78a11460bedf070db7dbe98d0caf0d0afbe9fa0312885da871bb701f41707f9e Oct 02 11:45:19 crc kubenswrapper[4725]: W1002 11:45:19.910406 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce83dde7_78f3_4d87_8656_61dd112db89e.slice/crio-4a09a02bb55caec7346b8fa4218c8ea2e6d400798e59b63a33f2d7428104c1a5 WatchSource:0}: Error finding container 4a09a02bb55caec7346b8fa4218c8ea2e6d400798e59b63a33f2d7428104c1a5: Status 404 returned error can't find the container with id 4a09a02bb55caec7346b8fa4218c8ea2e6d400798e59b63a33f2d7428104c1a5 Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.205896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9085121-a59b-4dbc-95fa-2a61f0432970","Type":"ContainerStarted","Data":"699fe53051f00862376f4b5fbaa65138d0f688db81e03375bee4d7574d9db70a"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.208785 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b64d7b7-832c-4a08-96e5-27fcd2c01988","Type":"ContainerStarted","Data":"fc5932def8d06946baec4e92fa8e78f25e3602da1c7f63e99dc30ca6acbddc87"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.211432 4725 generic.go:334] "Generic (PLEG): container finished" podID="28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" containerID="029979155bce69272c625d1eef07ed92fc47f07656ead83ed46f5e1cb2f77493" 
exitCode=0 Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.211480 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" event={"ID":"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614","Type":"ContainerDied","Data":"029979155bce69272c625d1eef07ed92fc47f07656ead83ed46f5e1cb2f77493"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.211543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" event={"ID":"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614","Type":"ContainerStarted","Data":"fad0f99f39b04a8d27fc1a7cdb81f24d82ebf947f9f1f9889c9caa62ece4bdbd"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.213127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2mzwk" event={"ID":"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b","Type":"ContainerStarted","Data":"9fd50f9fb924f309fc9db3ba1512ab991207ccee4d08f90b00d47d6211c83dcd"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.214892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zsfz8" event={"ID":"66f1562e-003f-4f29-a7ba-2c42b823662e","Type":"ContainerStarted","Data":"99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.216605 4725 generic.go:334] "Generic (PLEG): container finished" podID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerID="ca2aaa98998be115d2d6ae7d1fa88df4d3148cf6f4b96d1dbbf3adda51a895d3" exitCode=0 Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.216670 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2n9b7" event={"ID":"ce83dde7-78f3-4d87-8656-61dd112db89e","Type":"ContainerDied","Data":"ca2aaa98998be115d2d6ae7d1fa88df4d3148cf6f4b96d1dbbf3adda51a895d3"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.216693 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2n9b7" event={"ID":"ce83dde7-78f3-4d87-8656-61dd112db89e","Type":"ContainerStarted","Data":"4a09a02bb55caec7346b8fa4218c8ea2e6d400798e59b63a33f2d7428104c1a5"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.220399 4725 generic.go:334] "Generic (PLEG): container finished" podID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" containerID="e49d09ed8be9f2649f76543fe00cd7cbbdf0084a25bc09ab572f008efec818a4" exitCode=0 Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.221267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" event={"ID":"5cb7de96-e482-43c5-ba1f-7d2532e1d516","Type":"ContainerDied","Data":"e49d09ed8be9f2649f76543fe00cd7cbbdf0084a25bc09ab572f008efec818a4"} Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.221289 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" event={"ID":"5cb7de96-e482-43c5-ba1f-7d2532e1d516","Type":"ContainerStarted","Data":"78a11460bedf070db7dbe98d0caf0d0afbe9fa0312885da871bb701f41707f9e"} Oct 02 11:45:20 crc kubenswrapper[4725]: E1002 11:45:20.222878 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="f8dd7ed6-4794-4bf8-8d40-8bb837848eed" Oct 02 11:45:20 crc kubenswrapper[4725]: E1002 11:45:20.223048 4725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="a3c790f7-722b-4693-a9cc-ba649c5833ca" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.236658 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.732257999 podStartE2EDuration="39.236639838s" podCreationTimestamp="2025-10-02 11:44:41 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.799405734 +0000 UTC m=+1012.706905207" lastFinishedPulling="2025-10-02 11:45:00.303787583 +0000 UTC m=+1020.211287046" observedRunningTime="2025-10-02 11:45:20.230879587 +0000 UTC m=+1040.138379050" watchObservedRunningTime="2025-10-02 11:45:20.236639838 +0000 UTC m=+1040.144139301" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.313441 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=32.72696389 podStartE2EDuration="40.313421902s" podCreationTimestamp="2025-10-02 11:44:40 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.423552067 +0000 UTC m=+1012.331051530" lastFinishedPulling="2025-10-02 11:45:00.010010079 +0000 UTC m=+1019.917509542" observedRunningTime="2025-10-02 11:45:20.306801728 +0000 UTC m=+1040.214301221" watchObservedRunningTime="2025-10-02 11:45:20.313421902 +0000 UTC m=+1040.220921365" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.576062 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.596291 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:20 crc kubenswrapper[4725]: E1002 11:45:20.601371 4725 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.162:37760->38.129.56.162:34805: write tcp 38.129.56.162:37760->38.129.56.162:34805: write: broken pipe Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741446 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt25t\" (UniqueName: \"kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t\") pod \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config\") pod \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741564 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc\") pod \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741661 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb\") pod \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42fnc\" (UniqueName: \"kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc\") pod \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741797 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc\") pod \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\" (UID: \"5cb7de96-e482-43c5-ba1f-7d2532e1d516\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741851 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb\") pod \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741883 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config\") pod \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.741904 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb\") pod \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\" (UID: \"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614\") " Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.745784 4725 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t" (OuterVolumeSpecName: "kube-api-access-xt25t") pod "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" (UID: "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614"). InnerVolumeSpecName "kube-api-access-xt25t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.746317 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc" (OuterVolumeSpecName: "kube-api-access-42fnc") pod "5cb7de96-e482-43c5-ba1f-7d2532e1d516" (UID: "5cb7de96-e482-43c5-ba1f-7d2532e1d516"). InnerVolumeSpecName "kube-api-access-42fnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.761621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" (UID: "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.762239 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config" (OuterVolumeSpecName: "config") pod "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" (UID: "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.765435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" (UID: "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.766355 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config" (OuterVolumeSpecName: "config") pod "5cb7de96-e482-43c5-ba1f-7d2532e1d516" (UID: "5cb7de96-e482-43c5-ba1f-7d2532e1d516"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.768266 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cb7de96-e482-43c5-ba1f-7d2532e1d516" (UID: "5cb7de96-e482-43c5-ba1f-7d2532e1d516"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.771886 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" (UID: "28eecdbf-fb04-4a0b-8c4c-ff2a2e278614"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.772298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cb7de96-e482-43c5-ba1f-7d2532e1d516" (UID: "5cb7de96-e482-43c5-ba1f-7d2532e1d516"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843495 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42fnc\" (UniqueName: \"kubernetes.io/projected/5cb7de96-e482-43c5-ba1f-7d2532e1d516-kube-api-access-42fnc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843534 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843544 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843552 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843560 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843568 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt25t\" (UniqueName: \"kubernetes.io/projected/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-kube-api-access-xt25t\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843577 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843586 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:20 crc kubenswrapper[4725]: I1002 11:45:20.843594 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cb7de96-e482-43c5-ba1f-7d2532e1d516-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.228889 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.228903 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fb8h2" event={"ID":"28eecdbf-fb04-4a0b-8c4c-ff2a2e278614","Type":"ContainerDied","Data":"fad0f99f39b04a8d27fc1a7cdb81f24d82ebf947f9f1f9889c9caa62ece4bdbd"} Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.229522 4725 scope.go:117] "RemoveContainer" containerID="029979155bce69272c625d1eef07ed92fc47f07656ead83ed46f5e1cb2f77493" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.232557 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2n9b7" event={"ID":"ce83dde7-78f3-4d87-8656-61dd112db89e","Type":"ContainerStarted","Data":"cc3da6add29aa07ed3702e814aa85e3574b8a657a02e9d2e63051e0e2be3df5a"} Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.233550 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.236057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" event={"ID":"5cb7de96-e482-43c5-ba1f-7d2532e1d516","Type":"ContainerDied","Data":"78a11460bedf070db7dbe98d0caf0d0afbe9fa0312885da871bb701f41707f9e"} Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.236135 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.259945 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-2n9b7" podStartSLOduration=7.25992322 podStartE2EDuration="7.25992322s" podCreationTimestamp="2025-10-02 11:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:45:21.254400864 +0000 UTC m=+1041.161900327" watchObservedRunningTime="2025-10-02 11:45:21.25992322 +0000 UTC m=+1041.167422673" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.263048 4725 scope.go:117] "RemoveContainer" containerID="e49d09ed8be9f2649f76543fe00cd7cbbdf0084a25bc09ab572f008efec818a4" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.355976 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.362546 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fb8h2"] Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.814763 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 02 11:45:21 crc kubenswrapper[4725]: I1002 11:45:21.872170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.061633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.062583 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.248568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"a3c790f7-722b-4693-a9cc-ba649c5833ca","Type":"ContainerStarted","Data":"a123b110a841a194d087f70dae4704169987c1c305c06eba71caedd4335b9386"} Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.249692 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.260929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2mzwk" event={"ID":"d765fdd7-c196-4fdc-b5ae-813c10a8bd2b","Type":"ContainerStarted","Data":"832e9c0bc0391db1db3e66c6ed2176e218df052b677eaf8118009bc7f90ca610"} Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.262046 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.262485 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.281812 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.721609861 podStartE2EDuration="35.281793234s" podCreationTimestamp="2025-10-02 11:44:47 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.559209414 +0000 UTC m=+1012.466708877" lastFinishedPulling="2025-10-02 11:45:00.119392767 +0000 UTC m=+1020.026892250" observedRunningTime="2025-10-02 11:45:22.27517767 +0000 UTC m=+1042.182677393" watchObservedRunningTime="2025-10-02 11:45:22.281793234 +0000 UTC m=+1042.189292697" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.308849 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.312966 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.322070 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2mzwk" podStartSLOduration=10.038562585 podStartE2EDuration="11.322049215s" podCreationTimestamp="2025-10-02 11:45:11 +0000 UTC" firstStartedPulling="2025-10-02 11:45:19.795939742 +0000 UTC m=+1039.703439205" lastFinishedPulling="2025-10-02 11:45:21.079426352 +0000 UTC m=+1040.986925835" observedRunningTime="2025-10-02 11:45:22.316423576 +0000 UTC m=+1042.223923039" watchObservedRunningTime="2025-10-02 11:45:22.322049215 +0000 UTC m=+1042.229548688" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.428835 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 02 11:45:22 crc kubenswrapper[4725]: I1002 11:45:22.428939 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 02 11:45:23 crc kubenswrapper[4725]: I1002 11:45:23.281442 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" path="/var/lib/kubelet/pods/28eecdbf-fb04-4a0b-8c4c-ff2a2e278614/volumes" Oct 02 11:45:23 crc kubenswrapper[4725]: I1002 11:45:23.595255 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:23 crc kubenswrapper[4725]: E1002 
11:45:23.595444 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 11:45:23 crc kubenswrapper[4725]: E1002 11:45:23.595839 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 11:45:23 crc kubenswrapper[4725]: E1002 11:45:23.595889 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift podName:e1fb73ad-22b0-46f9-a5c0-9faba5acb82d nodeName:}" failed. No retries permitted until 2025-10-02 11:45:31.59587414 +0000 UTC m=+1051.503373603 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift") pod "swift-storage-0" (UID: "e1fb73ad-22b0-46f9-a5c0-9faba5acb82d") : configmap "swift-ring-files" not found Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.276770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zsfz8" event={"ID":"66f1562e-003f-4f29-a7ba-2c42b823662e","Type":"ContainerStarted","Data":"44b169d9d62432f5a404ab3ed4d4d1fd0785624c005565f0726e9d3c50b64115"} Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.278323 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f8dd7ed6-4794-4bf8-8d40-8bb837848eed","Type":"ContainerStarted","Data":"92203398b0327dc6883b97ed936359d794be7621e71715a6cd11654e7089757e"} Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.311751 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zsfz8" podStartSLOduration=4.065870637 podStartE2EDuration="8.311734958s" podCreationTimestamp="2025-10-02 11:45:16 +0000 UTC" firstStartedPulling="2025-10-02 11:45:19.666687356 +0000 UTC m=+1039.574186819" lastFinishedPulling="2025-10-02 11:45:23.912551677 +0000 UTC m=+1043.820051140" observedRunningTime="2025-10-02 11:45:24.307110887 +0000 UTC m=+1044.214610380" watchObservedRunningTime="2025-10-02 11:45:24.311734958 +0000 UTC m=+1044.219234421" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.344250 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.954914104 podStartE2EDuration="34.344190924s" podCreationTimestamp="2025-10-02 11:44:50 +0000 UTC" firstStartedPulling="2025-10-02 11:44:52.975644706 +0000 UTC m=+1012.883144169" lastFinishedPulling="2025-10-02 11:45:00.364921526 +0000 UTC m=+1020.272420989" observedRunningTime="2025-10-02 11:45:24.334403386 +0000 UTC m=+1044.241902890" watchObservedRunningTime="2025-10-02 11:45:24.344190924 +0000 UTC m=+1044.251690427" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.348667 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.494656 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.519752 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 02 11:45:24 crc kubenswrapper[4725]: E1002 11:45:24.520106 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 
11:45:24.520145 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: E1002 11:45:24.520184 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710c6864-7bd0-41ca-b599-3234b6cea3b4" containerName="collect-profiles" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.520196 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="710c6864-7bd0-41ca-b599-3234b6cea3b4" containerName="collect-profiles" Oct 02 11:45:24 crc kubenswrapper[4725]: E1002 11:45:24.520212 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.520218 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.520394 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="710c6864-7bd0-41ca-b599-3234b6cea3b4" containerName="collect-profiles" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.520405 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="28eecdbf-fb04-4a0b-8c4c-ff2a2e278614" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.520416 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" containerName="init" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.522381 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.524995 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v25kh" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.525826 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.525974 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.526105 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.556102 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.590871 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612811 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-scripts\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612840 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-config\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612875 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612920 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.612942 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.613051 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb5r2\" (UniqueName: \"kubernetes.io/projected/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-kube-api-access-gb5r2\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.713935 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-scripts\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.713987 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-config\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.714015 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.714056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.714078 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc 
kubenswrapper[4725]: I1002 11:45:24.714154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb5r2\" (UniqueName: \"kubernetes.io/projected/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-kube-api-access-gb5r2\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.714189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.714811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.715275 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-config\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.715447 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-scripts\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.719608 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.719671 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.719684 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.732617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb5r2\" (UniqueName: \"kubernetes.io/projected/6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041-kube-api-access-gb5r2\") pod \"ovn-northd-0\" (UID: \"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041\") " pod="openstack/ovn-northd-0" Oct 02 11:45:24 crc kubenswrapper[4725]: I1002 11:45:24.843302 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.112195 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.154401 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.154610 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="dnsmasq-dns" containerID="cri-o://612576d0270ba76b60e7fd11eb469a5f476f5ca2f541acf7f6cd882673577519" gracePeriod=10 Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.298522 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.307327 4725 generic.go:334] "Generic (PLEG): container finished" podID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerID="612576d0270ba76b60e7fd11eb469a5f476f5ca2f541acf7f6cd882673577519" exitCode=0 Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.308334 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" event={"ID":"942ac9b9-85b4-45a7-8048-2d97a3fdd353","Type":"ContainerDied","Data":"612576d0270ba76b60e7fd11eb469a5f476f5ca2f541acf7f6cd882673577519"} Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.595201 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.729149 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc\") pod \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.729295 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config\") pod \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.729338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zft94\" (UniqueName: \"kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94\") pod \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\" (UID: \"942ac9b9-85b4-45a7-8048-2d97a3fdd353\") " Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.734890 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94" (OuterVolumeSpecName: "kube-api-access-zft94") pod "942ac9b9-85b4-45a7-8048-2d97a3fdd353" (UID: "942ac9b9-85b4-45a7-8048-2d97a3fdd353"). InnerVolumeSpecName "kube-api-access-zft94". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.763407 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config" (OuterVolumeSpecName: "config") pod "942ac9b9-85b4-45a7-8048-2d97a3fdd353" (UID: "942ac9b9-85b4-45a7-8048-2d97a3fdd353"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.779499 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "942ac9b9-85b4-45a7-8048-2d97a3fdd353" (UID: "942ac9b9-85b4-45a7-8048-2d97a3fdd353"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.831455 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.831496 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942ac9b9-85b4-45a7-8048-2d97a3fdd353-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:25 crc kubenswrapper[4725]: I1002 11:45:25.831518 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zft94\" (UniqueName: \"kubernetes.io/projected/942ac9b9-85b4-45a7-8048-2d97a3fdd353-kube-api-access-zft94\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.163669 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.221340 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.319797 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" event={"ID":"942ac9b9-85b4-45a7-8048-2d97a3fdd353","Type":"ContainerDied","Data":"e469e2c718fb9357c4151b9c3075de5c169498082f0d89327c383656dc72bcb2"} Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.319833 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gtw6b" Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.319858 4725 scope.go:117] "RemoveContainer" containerID="612576d0270ba76b60e7fd11eb469a5f476f5ca2f541acf7f6cd882673577519" Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.322039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041","Type":"ContainerStarted","Data":"484b54d89173ec0c59e801e741af1168b30f603a1581f3ef034ea338fb9bbdd1"} Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.365070 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.371689 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gtw6b"] Oct 02 11:45:26 crc kubenswrapper[4725]: I1002 11:45:26.455367 4725 scope.go:117] "RemoveContainer" containerID="52fb34d835875159f4396400c295a5dfd4160189aa2d6c6abb6b05ba55304976" Oct 02 11:45:27 crc kubenswrapper[4725]: I1002 11:45:27.278078 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" path="/var/lib/kubelet/pods/942ac9b9-85b4-45a7-8048-2d97a3fdd353/volumes" Oct 02 11:45:27 crc kubenswrapper[4725]: I1002 11:45:27.333567 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041","Type":"ContainerStarted","Data":"c2402169d77f4184cd88d5a5118f53f05711f3e9fcd9cb238dde541c97e7592e"} Oct 02 11:45:27 crc kubenswrapper[4725]: I1002 11:45:27.333628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041","Type":"ContainerStarted","Data":"c4f4322826b395ae6607fb38fe2bfe10113f6e78dde30d10bb7828dce8538680"} Oct 02 11:45:27 crc kubenswrapper[4725]: I1002 11:45:27.333776 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 02 11:45:27 crc kubenswrapper[4725]: I1002 11:45:27.363165 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.163566509 podStartE2EDuration="3.363143947s" podCreationTimestamp="2025-10-02 11:45:24 +0000 UTC" firstStartedPulling="2025-10-02 11:45:25.320171739 +0000 UTC m=+1045.227671202" lastFinishedPulling="2025-10-02 11:45:26.519749177 +0000 UTC m=+1046.427248640" observedRunningTime="2025-10-02 11:45:27.356662256 +0000 UTC m=+1047.264161739" watchObservedRunningTime="2025-10-02 11:45:27.363143947 +0000 UTC m=+1047.270643420" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.332545 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fnh89"] Oct 02 11:45:28 crc kubenswrapper[4725]: E1002 11:45:28.334699 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="dnsmasq-dns" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.334734 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="dnsmasq-dns" Oct 02 11:45:28 crc kubenswrapper[4725]: E1002 11:45:28.334758 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="init" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.334765 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="init" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.334951 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="942ac9b9-85b4-45a7-8048-2d97a3fdd353" containerName="dnsmasq-dns" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.335488 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fnh89" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.340579 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fnh89"] Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.474685 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz527\" (UniqueName: \"kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527\") pod \"glance-db-create-fnh89\" (UID: \"90069d14-eefb-4a14-bc3a-1d553dd8cb85\") " pod="openstack/glance-db-create-fnh89" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.577881 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz527\" (UniqueName: \"kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527\") pod \"glance-db-create-fnh89\" (UID: \"90069d14-eefb-4a14-bc3a-1d553dd8cb85\") " pod="openstack/glance-db-create-fnh89" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.603141 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz527\" (UniqueName: \"kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527\") pod \"glance-db-create-fnh89\" (UID: \"90069d14-eefb-4a14-bc3a-1d553dd8cb85\") " pod="openstack/glance-db-create-fnh89" Oct 02 11:45:28 crc kubenswrapper[4725]: I1002 11:45:28.654210 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fnh89" Oct 02 11:45:29 crc kubenswrapper[4725]: I1002 11:45:29.145870 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fnh89"] Oct 02 11:45:29 crc kubenswrapper[4725]: W1002 11:45:29.164494 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90069d14_eefb_4a14_bc3a_1d553dd8cb85.slice/crio-5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4 WatchSource:0}: Error finding container 5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4: Status 404 returned error can't find the container with id 5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4 Oct 02 11:45:29 crc kubenswrapper[4725]: I1002 11:45:29.351406 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fnh89" event={"ID":"90069d14-eefb-4a14-bc3a-1d553dd8cb85","Type":"ContainerStarted","Data":"d0d624d8482b2be82923c2adb0fbeebb9cc8420cf663515b8f9dc04e5e207e2e"} Oct 02 11:45:29 crc kubenswrapper[4725]: I1002 11:45:29.351459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fnh89" event={"ID":"90069d14-eefb-4a14-bc3a-1d553dd8cb85","Type":"ContainerStarted","Data":"5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4"} Oct 02 11:45:29 crc kubenswrapper[4725]: I1002 11:45:29.368642 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fnh89" podStartSLOduration=1.368624077 podStartE2EDuration="1.368624077s" podCreationTimestamp="2025-10-02 11:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:45:29.363169383 +0000 UTC m=+1049.270668876" watchObservedRunningTime="2025-10-02 11:45:29.368624077 +0000 UTC m=+1049.276123550" Oct 02 11:45:30 crc kubenswrapper[4725]: I1002 11:45:30.361628 4725 generic.go:334] "Generic (PLEG): container finished" podID="90069d14-eefb-4a14-bc3a-1d553dd8cb85" containerID="d0d624d8482b2be82923c2adb0fbeebb9cc8420cf663515b8f9dc04e5e207e2e" exitCode=0 Oct 02 11:45:30 crc kubenswrapper[4725]: I1002 11:45:30.361814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fnh89" event={"ID":"90069d14-eefb-4a14-bc3a-1d553dd8cb85","Type":"ContainerDied","Data":"d0d624d8482b2be82923c2adb0fbeebb9cc8420cf663515b8f9dc04e5e207e2e"} Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.376566 4725 generic.go:334] "Generic (PLEG): container finished" podID="66f1562e-003f-4f29-a7ba-2c42b823662e" containerID="44b169d9d62432f5a404ab3ed4d4d1fd0785624c005565f0726e9d3c50b64115" exitCode=0 Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.376669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zsfz8" event={"ID":"66f1562e-003f-4f29-a7ba-2c42b823662e","Type":"ContainerDied","Data":"44b169d9d62432f5a404ab3ed4d4d1fd0785624c005565f0726e9d3c50b64115"} Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.642030 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.653892 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e1fb73ad-22b0-46f9-a5c0-9faba5acb82d-etc-swift\") pod \"swift-storage-0\" (UID: \"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d\") " pod="openstack/swift-storage-0" Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.715516 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fnh89" Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.731187 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.844439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz527\" (UniqueName: \"kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527\") pod \"90069d14-eefb-4a14-bc3a-1d553dd8cb85\" (UID: \"90069d14-eefb-4a14-bc3a-1d553dd8cb85\") " Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.847936 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527" (OuterVolumeSpecName: "kube-api-access-pz527") pod "90069d14-eefb-4a14-bc3a-1d553dd8cb85" (UID: "90069d14-eefb-4a14-bc3a-1d553dd8cb85"). InnerVolumeSpecName "kube-api-access-pz527". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:31 crc kubenswrapper[4725]: I1002 11:45:31.946345 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz527\" (UniqueName: \"kubernetes.io/projected/90069d14-eefb-4a14-bc3a-1d553dd8cb85-kube-api-access-pz527\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.349180 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 02 11:45:32 crc kubenswrapper[4725]: W1002 11:45:32.351454 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fb73ad_22b0_46f9_a5c0_9faba5acb82d.slice/crio-d5c768247ef620ac2b1a58c4e16d85f3fd8c1edec42a8e881e5c194396200417 WatchSource:0}: Error finding container d5c768247ef620ac2b1a58c4e16d85f3fd8c1edec42a8e881e5c194396200417: Status 404 returned error can't find the container with id d5c768247ef620ac2b1a58c4e16d85f3fd8c1edec42a8e881e5c194396200417 Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.386248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fnh89" event={"ID":"90069d14-eefb-4a14-bc3a-1d553dd8cb85","Type":"ContainerDied","Data":"5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4"} Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.386303 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a5043cdb1818de4ce1012baf77b9fd16ea636688b82ae3c4cba374ce76993e4" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.386268 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fnh89" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.387447 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"d5c768247ef620ac2b1a58c4e16d85f3fd8c1edec42a8e881e5c194396200417"} Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.542334 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qskpf"] Oct 02 11:45:32 crc kubenswrapper[4725]: E1002 11:45:32.542670 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90069d14-eefb-4a14-bc3a-1d553dd8cb85" containerName="mariadb-database-create" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.542682 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="90069d14-eefb-4a14-bc3a-1d553dd8cb85" containerName="mariadb-database-create" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.542856 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="90069d14-eefb-4a14-bc3a-1d553dd8cb85" containerName="mariadb-database-create" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.543396 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.567602 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qskpf"] Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.644775 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.657561 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7jb\" (UniqueName: \"kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb\") pod \"keystone-db-create-qskpf\" (UID: \"9e2149e6-a89e-4464-aab7-4acc9b060ed3\") " pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.758962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759322 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759350 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759406 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqk2f\" (UniqueName: \"kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 
11:45:32.759478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759544 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759600 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle\") pod \"66f1562e-003f-4f29-a7ba-2c42b823662e\" (UID: \"66f1562e-003f-4f29-a7ba-2c42b823662e\") " Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.759829 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7jb\" (UniqueName: \"kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb\") pod \"keystone-db-create-qskpf\" (UID: \"9e2149e6-a89e-4464-aab7-4acc9b060ed3\") " pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.760520 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.760957 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.775156 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f" (OuterVolumeSpecName: "kube-api-access-zqk2f") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "kube-api-access-zqk2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.776144 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.776685 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7jb\" (UniqueName: \"kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb\") pod \"keystone-db-create-qskpf\" (UID: \"9e2149e6-a89e-4464-aab7-4acc9b060ed3\") " pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.782220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts" (OuterVolumeSpecName: "scripts") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.785485 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.802548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66f1562e-003f-4f29-a7ba-2c42b823662e" (UID: "66f1562e-003f-4f29-a7ba-2c42b823662e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862103 4725 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862145 4725 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862158 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f1562e-003f-4f29-a7ba-2c42b823662e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862172 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862185 4725 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66f1562e-003f-4f29-a7ba-2c42b823662e-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862197 4725 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66f1562e-003f-4f29-a7ba-2c42b823662e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.862210 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqk2f\" (UniqueName: 
\"kubernetes.io/projected/66f1562e-003f-4f29-a7ba-2c42b823662e-kube-api-access-zqk2f\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.865470 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.912972 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7jtpb"] Oct 02 11:45:32 crc kubenswrapper[4725]: E1002 11:45:32.913594 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f1562e-003f-4f29-a7ba-2c42b823662e" containerName="swift-ring-rebalance" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.913619 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f1562e-003f-4f29-a7ba-2c42b823662e" containerName="swift-ring-rebalance" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.913996 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f1562e-003f-4f29-a7ba-2c42b823662e" containerName="swift-ring-rebalance" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.914883 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.919678 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7jtpb"] Oct 02 11:45:32 crc kubenswrapper[4725]: I1002 11:45:32.963266 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbmx\" (UniqueName: \"kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx\") pod \"placement-db-create-7jtpb\" (UID: \"844997fc-7d91-4705-8364-46690b714963\") " pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.064616 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbmx\" (UniqueName: \"kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx\") pod \"placement-db-create-7jtpb\" (UID: \"844997fc-7d91-4705-8364-46690b714963\") " pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.086358 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbmx\" (UniqueName: \"kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx\") pod \"placement-db-create-7jtpb\" (UID: \"844997fc-7d91-4705-8364-46690b714963\") " pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.272285 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:33 crc kubenswrapper[4725]: W1002 11:45:33.277571 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e2149e6_a89e_4464_aab7_4acc9b060ed3.slice/crio-bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd WatchSource:0}: Error finding container bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd: Status 404 returned error can't find the container with id bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.279933 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qskpf"] Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.398297 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zsfz8" event={"ID":"66f1562e-003f-4f29-a7ba-2c42b823662e","Type":"ContainerDied","Data":"99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec"} Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.398366 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99efbaaf7c1bd6dfb994f9ed4d0181abfc2a59cd840ce2d21fed3787c05065ec" Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.398451 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zsfz8" Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.401938 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qskpf" event={"ID":"9e2149e6-a89e-4464-aab7-4acc9b060ed3","Type":"ContainerStarted","Data":"bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd"} Oct 02 11:45:33 crc kubenswrapper[4725]: I1002 11:45:33.965856 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7jtpb"] Oct 02 11:45:33 crc kubenswrapper[4725]: W1002 11:45:33.978970 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod844997fc_7d91_4705_8364_46690b714963.slice/crio-e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2 WatchSource:0}: Error finding container e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2: Status 404 returned error can't find the container with id e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2 Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.061169 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.063009 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-98gqf" podUID="ba80438e-e220-487f-b365-27a8224c7ef2" containerName="ovn-controller" probeResult="failure" output=< Oct 02 11:45:34 crc kubenswrapper[4725]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 02 11:45:34 crc kubenswrapper[4725]: > Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.063720 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b2r45" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.279244 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-98gqf-config-qjkdc"] Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.281061 4725 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.283133 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.293565 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf-config-qjkdc"] Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398047 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398100 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398156 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpkk\" (UniqueName: \"kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.398269 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.425316 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerID="bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d" exitCode=0 Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.425390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerDied","Data":"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.428606 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"697099d83011b4a89003a2070f3ef0e9abf42d3dc398a0232e14ecfba4ab17ef"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.428648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"481f55738ab055baee0bf6ee50f53d71786874720d79bb27d50b127ff3e423d3"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.428659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"baf1d0487939bf7a115757b5035fd562c7993e5ee80480c536cd5ec7abcd5644"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.437235 4725 generic.go:334] "Generic (PLEG): container finished" podID="9e2149e6-a89e-4464-aab7-4acc9b060ed3" containerID="35e485dbc50d40a461165d015bffcfa1cb39dd455c19a91aac3f984115663925" exitCode=0 Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.437340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qskpf" event={"ID":"9e2149e6-a89e-4464-aab7-4acc9b060ed3","Type":"ContainerDied","Data":"35e485dbc50d40a461165d015bffcfa1cb39dd455c19a91aac3f984115663925"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.438762 4725 generic.go:334] "Generic (PLEG): container finished" podID="844997fc-7d91-4705-8364-46690b714963" containerID="8eff4c84e3fdd2f0678dff12c75231c48957010fcf87a44196cfeaabfd8425e0" exitCode=0 Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.438835 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7jtpb" event={"ID":"844997fc-7d91-4705-8364-46690b714963","Type":"ContainerDied","Data":"8eff4c84e3fdd2f0678dff12c75231c48957010fcf87a44196cfeaabfd8425e0"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.438887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7jtpb" event={"ID":"844997fc-7d91-4705-8364-46690b714963","Type":"ContainerStarted","Data":"e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2"} Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.499598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.499914 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnpkk\" (UniqueName: \"kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500071 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500162 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500202 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500416 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.500487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.501140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.502614 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.520396 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnpkk\" (UniqueName: \"kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk\") pod \"ovn-controller-98gqf-config-qjkdc\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:34 crc kubenswrapper[4725]: I1002 11:45:34.649862 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.089397 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf-config-qjkdc"] Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.446986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerStarted","Data":"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f"} Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.447393 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.449463 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"6a22fbd24397f0961c856f393d4636869ddb951622d42874c1735c07b7bc621d"} Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.451562 4725 generic.go:334] "Generic (PLEG): container finished" podID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerID="f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505" exitCode=0 Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.451611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerDied","Data":"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505"} Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.466324 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-qjkdc" event={"ID":"3b448599-289a-4e48-b317-d3ded15badd2","Type":"ContainerStarted","Data":"20318190e9712d50e65e207c5c89bb0086328f99e3893e0a975ada92379c84e1"} Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.466366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-qjkdc" event={"ID":"3b448599-289a-4e48-b317-d3ded15badd2","Type":"ContainerStarted","Data":"778947c00c8b8f09ae00589a0624a509145ed424b823d1eed7f0c7ffa3dbcdc6"} Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.506257 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.771391578 podStartE2EDuration="57.506235041s" podCreationTimestamp="2025-10-02 11:44:38 +0000 UTC" firstStartedPulling="2025-10-02 11:44:51.82956472 +0000 UTC m=+1011.737064183" lastFinishedPulling="2025-10-02 11:44:59.564408183 +0000 UTC m=+1019.471907646" observedRunningTime="2025-10-02 11:45:35.473811326 +0000 UTC m=+1055.381310829" watchObservedRunningTime="2025-10-02 11:45:35.506235041 +0000 UTC m=+1055.413734494" Oct 02 11:45:35 crc kubenswrapper[4725]: I1002 11:45:35.518710 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-98gqf-config-qjkdc" podStartSLOduration=1.518692319 podStartE2EDuration="1.518692319s" podCreationTimestamp="2025-10-02 11:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:45:35.514489248 +0000 UTC m=+1055.421988711" watchObservedRunningTime="2025-10-02 11:45:35.518692319 +0000 UTC m=+1055.426191782" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.118513 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.126295 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.234948 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7jb\" (UniqueName: \"kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb\") pod \"9e2149e6-a89e-4464-aab7-4acc9b060ed3\" (UID: \"9e2149e6-a89e-4464-aab7-4acc9b060ed3\") " Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.235033 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmbmx\" (UniqueName: \"kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx\") pod \"844997fc-7d91-4705-8364-46690b714963\" (UID: \"844997fc-7d91-4705-8364-46690b714963\") " Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.240920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb" (OuterVolumeSpecName: "kube-api-access-sn7jb") pod "9e2149e6-a89e-4464-aab7-4acc9b060ed3" (UID: "9e2149e6-a89e-4464-aab7-4acc9b060ed3"). InnerVolumeSpecName "kube-api-access-sn7jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.241355 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx" (OuterVolumeSpecName: "kube-api-access-jmbmx") pod "844997fc-7d91-4705-8364-46690b714963" (UID: "844997fc-7d91-4705-8364-46690b714963"). InnerVolumeSpecName "kube-api-access-jmbmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.337880 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn7jb\" (UniqueName: \"kubernetes.io/projected/9e2149e6-a89e-4464-aab7-4acc9b060ed3-kube-api-access-sn7jb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.337934 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmbmx\" (UniqueName: \"kubernetes.io/projected/844997fc-7d91-4705-8364-46690b714963-kube-api-access-jmbmx\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.477446 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerStarted","Data":"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.477822 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.480886 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7jtpb" event={"ID":"844997fc-7d91-4705-8364-46690b714963","Type":"ContainerDied","Data":"e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.480913 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8a0b4571868e0aafbfb3f211dbb81a0c595fefedc6d0428551d3807b4f245c2" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.480954 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7jtpb" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.497248 4725 generic.go:334] "Generic (PLEG): container finished" podID="3b448599-289a-4e48-b317-d3ded15badd2" containerID="20318190e9712d50e65e207c5c89bb0086328f99e3893e0a975ada92379c84e1" exitCode=0 Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.497390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-qjkdc" event={"ID":"3b448599-289a-4e48-b317-d3ded15badd2","Type":"ContainerDied","Data":"20318190e9712d50e65e207c5c89bb0086328f99e3893e0a975ada92379c84e1"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.537332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"f07268917b3f335b24aaac6d6f6c389999dc19319ece91dc4b220adf5e2b8c1a"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.537443 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"4eeb4af3abd10b28302b80f103383b953f9f514dd58e4b3b6c79c0cd94ecb311"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.542125 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qskpf" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.542264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qskpf" event={"ID":"9e2149e6-a89e-4464-aab7-4acc9b060ed3","Type":"ContainerDied","Data":"bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd"} Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.542322 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb82e8f8d057b08fcb40fc4dcc27b68bfc6cf8f27555e07aad5b99ad4a5e4cfd" Oct 02 11:45:36 crc kubenswrapper[4725]: I1002 11:45:36.546187 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.281122748 podStartE2EDuration="58.54577585s" podCreationTimestamp="2025-10-02 11:44:38 +0000 UTC" firstStartedPulling="2025-10-02 11:44:50.930055491 +0000 UTC m=+1010.837554954" lastFinishedPulling="2025-10-02 11:45:00.194708593 +0000 UTC m=+1020.102208056" observedRunningTime="2025-10-02 11:45:36.524154451 +0000 UTC m=+1056.431653924" watchObservedRunningTime="2025-10-02 11:45:36.54577585 +0000 UTC m=+1056.453275333" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.552628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"b5384a3f642c6c939ac787ad520efd0c8c42fd7be1924b6582e6a4890c6826b3"} Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.553016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"7faf9dd05d01af41f7984fd757774d8b5041df3ac9d24a6f46f0ce458e9bc5de"} Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.862904 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973383 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973567 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973596 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973779 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973777 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run" (OuterVolumeSpecName: "var-run") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973894 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973950 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpkk\" (UniqueName: \"kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.973996 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts\") pod \"3b448599-289a-4e48-b317-d3ded15badd2\" (UID: \"3b448599-289a-4e48-b317-d3ded15badd2\") " Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.974399 4725 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.974423 4725 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.974435 4725 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b448599-289a-4e48-b317-d3ded15badd2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.974538 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts" (OuterVolumeSpecName: "scripts") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.974563 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:37 crc kubenswrapper[4725]: I1002 11:45:37.981950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk" (OuterVolumeSpecName: "kube-api-access-xnpkk") pod "3b448599-289a-4e48-b317-d3ded15badd2" (UID: "3b448599-289a-4e48-b317-d3ded15badd2"). InnerVolumeSpecName "kube-api-access-xnpkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.075952 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnpkk\" (UniqueName: \"kubernetes.io/projected/3b448599-289a-4e48-b317-d3ded15badd2-kube-api-access-xnpkk\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.075984 4725 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.075995 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b448599-289a-4e48-b317-d3ded15badd2-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409179 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1769-account-create-zwtgw"] Oct 02 11:45:38 crc kubenswrapper[4725]: E1002 11:45:38.409593 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2149e6-a89e-4464-aab7-4acc9b060ed3" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409609 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2149e6-a89e-4464-aab7-4acc9b060ed3" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: E1002 11:45:38.409640 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844997fc-7d91-4705-8364-46690b714963" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409649 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="844997fc-7d91-4705-8364-46690b714963" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: E1002 11:45:38.409667 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b448599-289a-4e48-b317-d3ded15badd2" containerName="ovn-config" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409677 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b448599-289a-4e48-b317-d3ded15badd2" containerName="ovn-config" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409900 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="844997fc-7d91-4705-8364-46690b714963" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409924 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2149e6-a89e-4464-aab7-4acc9b060ed3" containerName="mariadb-database-create" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.409957 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b448599-289a-4e48-b317-d3ded15badd2" containerName="ovn-config" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.410826 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.419100 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.424229 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1769-account-create-zwtgw"] Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.567062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-qjkdc" event={"ID":"3b448599-289a-4e48-b317-d3ded15badd2","Type":"ContainerDied","Data":"778947c00c8b8f09ae00589a0624a509145ed424b823d1eed7f0c7ffa3dbcdc6"} Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.568168 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778947c00c8b8f09ae00589a0624a509145ed424b823d1eed7f0c7ffa3dbcdc6" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.567160 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-qjkdc" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.585236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjszs\" (UniqueName: \"kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs\") pod \"glance-1769-account-create-zwtgw\" (UID: \"784ffc31-f99b-4f32-9af1-45fc14371051\") " pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.623095 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-98gqf-config-qjkdc"] Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.632488 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-98gqf-config-qjkdc"] Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.687392 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjszs\" (UniqueName: \"kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs\") pod \"glance-1769-account-create-zwtgw\" (UID: \"784ffc31-f99b-4f32-9af1-45fc14371051\") " pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.703634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjszs\" (UniqueName: \"kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs\") pod \"glance-1769-account-create-zwtgw\" (UID: \"784ffc31-f99b-4f32-9af1-45fc14371051\") " pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.731250 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.746582 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-98gqf-config-txt6q"] Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.748217 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.760488 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.772319 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf-config-txt6q"] Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.893622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnhsx\" (UniqueName: \"kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.893963 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.894031 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.894062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.894150 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.894170 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995320 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts\") pod 
\"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995343 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995367 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnhsx\" (UniqueName: \"kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995429 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995859 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.996337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.995862 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.998120 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:38 crc kubenswrapper[4725]: I1002 11:45:38.998200 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run\") pod 
\"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.018291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnhsx\" (UniqueName: \"kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx\") pod \"ovn-controller-98gqf-config-txt6q\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.075387 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.091987 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-98gqf" Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.285696 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b448599-289a-4e48-b317-d3ded15badd2" path="/var/lib/kubelet/pods/3b448599-289a-4e48-b317-d3ded15badd2/volumes" Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.291014 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1769-account-create-zwtgw"] Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.416116 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-98gqf-config-txt6q"] Oct 02 11:45:39 crc kubenswrapper[4725]: W1002 11:45:39.429384 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a07949_6891_4844_acd9_3285f15c5642.slice/crio-c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7 WatchSource:0}: Error finding container c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7: Status 404 returned error can't find the container with id c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7 Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.625116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"74b16a0aad6ca46754337de1c738c8d621971c6f52b4161e0519f9b8b316fc07"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.625357 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"a335e8ac94ed3384a0707429bbea8f98b721ccc37c9c171cb4eda425c3c51bb0"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.625367 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"c96ecd2917299a0091c57ab46df935496434d921eede6dac24877a41d9a0eaf3"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.625376 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"dfe6440f2f6d63e8d22e397a0aab0f7b9c11c62ec596b642c0d446239330a90d"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.625384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"b30581ca2788bd98e7a345b29e2f4717decb6d922eabb16557ca54b7d717e14d"} Oct 02 
11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.626308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-txt6q" event={"ID":"c5a07949-6891-4844-acd9-3285f15c5642","Type":"ContainerStarted","Data":"c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.627267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1769-account-create-zwtgw" event={"ID":"784ffc31-f99b-4f32-9af1-45fc14371051","Type":"ContainerStarted","Data":"009f9875fc2c7b8074178d49cd0db0c965a6ca16e9bf395203ff32cdb203090e"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.627322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1769-account-create-zwtgw" event={"ID":"784ffc31-f99b-4f32-9af1-45fc14371051","Type":"ContainerStarted","Data":"df55c42743b6074e87264d5f72f85cae2e4635725b97c1ede8a233e4b2515f0d"} Oct 02 11:45:39 crc kubenswrapper[4725]: I1002 11:45:39.911001 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.638046 4725 generic.go:334] "Generic (PLEG): container finished" podID="784ffc31-f99b-4f32-9af1-45fc14371051" containerID="009f9875fc2c7b8074178d49cd0db0c965a6ca16e9bf395203ff32cdb203090e" exitCode=0 Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.638156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1769-account-create-zwtgw" event={"ID":"784ffc31-f99b-4f32-9af1-45fc14371051","Type":"ContainerDied","Data":"009f9875fc2c7b8074178d49cd0db0c965a6ca16e9bf395203ff32cdb203090e"} Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.645879 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"3df400fae4afbf989598115716e28a831f348731e6c7ac6513f9675baf303001"} Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.645954 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e1fb73ad-22b0-46f9-a5c0-9faba5acb82d","Type":"ContainerStarted","Data":"44e0d1b396b37bbee80e1308da6a1de39ec2f4883fc505102191d50d9f9a7bd5"} Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.648064 4725 generic.go:334] "Generic (PLEG): container finished" podID="c5a07949-6891-4844-acd9-3285f15c5642" containerID="57ac9da0dbd1635fa7b9e46db2775869f5f26574fba5c114e5f4b5b49e01678d" exitCode=0 Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.648103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-txt6q" event={"ID":"c5a07949-6891-4844-acd9-3285f15c5642","Type":"ContainerDied","Data":"57ac9da0dbd1635fa7b9e46db2775869f5f26574fba5c114e5f4b5b49e01678d"} Oct 02 11:45:40 crc kubenswrapper[4725]: I1002 11:45:40.706178 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.587467635 podStartE2EDuration="26.706154929s" podCreationTimestamp="2025-10-02 11:45:14 +0000 UTC" firstStartedPulling="2025-10-02 11:45:32.353661836 +0000 UTC m=+1052.261161299" lastFinishedPulling="2025-10-02 11:45:38.47234913 +0000 UTC m=+1058.379848593" observedRunningTime="2025-10-02 11:45:40.701435544 +0000 UTC m=+1060.608935017" watchObservedRunningTime="2025-10-02 11:45:40.706154929 +0000 UTC m=+1060.613654412" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.001167 4725 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.002597 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.004776 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.019564 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.024440 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.146760 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjszs\" (UniqueName: \"kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs\") pod \"784ffc31-f99b-4f32-9af1-45fc14371051\" (UID: \"784ffc31-f99b-4f32-9af1-45fc14371051\") " Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.146984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.147025 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.147074 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrdt\" (UniqueName: \"kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.147459 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.147594 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.147714 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: 
\"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.153902 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs" (OuterVolumeSpecName: "kube-api-access-cjszs") pod "784ffc31-f99b-4f32-9af1-45fc14371051" (UID: "784ffc31-f99b-4f32-9af1-45fc14371051"). InnerVolumeSpecName "kube-api-access-cjszs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249293 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249405 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249438 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249485 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrdt\" (UniqueName: \"kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.249572 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjszs\" (UniqueName: \"kubernetes.io/projected/784ffc31-f99b-4f32-9af1-45fc14371051-kube-api-access-cjszs\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.250509 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 
11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.250554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.250592 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.250555 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.251274 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.275026 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrdt\" (UniqueName: \"kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt\") pod \"dnsmasq-dns-77585f5f8c-qmc5z\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.334668 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.603863 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.656390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" event={"ID":"c80ad4a4-1047-472a-b39f-96ccebce9c00","Type":"ContainerStarted","Data":"d6757bce8b8f87ed6b3d1b44afe7c182fd985407fce65df5e84e8c4d975f2ea0"} Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.660010 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1769-account-create-zwtgw" event={"ID":"784ffc31-f99b-4f32-9af1-45fc14371051","Type":"ContainerDied","Data":"df55c42743b6074e87264d5f72f85cae2e4635725b97c1ede8a233e4b2515f0d"} Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.660075 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df55c42743b6074e87264d5f72f85cae2e4635725b97c1ede8a233e4b2515f0d" Oct 02 11:45:41 crc kubenswrapper[4725]: I1002 11:45:41.660190 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1769-account-create-zwtgw" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.065915 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.166919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167041 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167062 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnhsx\" (UniqueName: \"kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167204 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167234 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts\") pod \"c5a07949-6891-4844-acd9-3285f15c5642\" (UID: \"c5a07949-6891-4844-acd9-3285f15c5642\") " Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.167865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run" (OuterVolumeSpecName: "var-run") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.168190 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.168611 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts" (OuterVolumeSpecName: "scripts") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.168649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.168676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.173964 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx" (OuterVolumeSpecName: "kube-api-access-cnhsx") pod "c5a07949-6891-4844-acd9-3285f15c5642" (UID: "c5a07949-6891-4844-acd9-3285f15c5642"). InnerVolumeSpecName "kube-api-access-cnhsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268740 4725 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268779 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268791 4725 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268804 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnhsx\" (UniqueName: \"kubernetes.io/projected/c5a07949-6891-4844-acd9-3285f15c5642-kube-api-access-cnhsx\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268817 4725 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a07949-6891-4844-acd9-3285f15c5642-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.268827 4725 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a07949-6891-4844-acd9-3285f15c5642-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.669968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-98gqf-config-txt6q" event={"ID":"c5a07949-6891-4844-acd9-3285f15c5642","Type":"ContainerDied","Data":"c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7"} Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.670023 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f60e14dde11d11a7d56aa1ec529402300750b79aed75a8447890c3cfa690c7" Oct 02 11:45:42 crc kubenswrapper[4725]: 
I1002 11:45:42.669989 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-98gqf-config-txt6q" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.673603 4725 generic.go:334] "Generic (PLEG): container finished" podID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerID="313fd03142575c990b2e4810b0f139c1ed68b54a82742007d8dd7e07c39cc312" exitCode=0 Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.673704 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" event={"ID":"c80ad4a4-1047-472a-b39f-96ccebce9c00","Type":"ContainerDied","Data":"313fd03142575c990b2e4810b0f139c1ed68b54a82742007d8dd7e07c39cc312"} Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.701791 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2adf-account-create-gdttl"] Oct 02 11:45:42 crc kubenswrapper[4725]: E1002 11:45:42.703247 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784ffc31-f99b-4f32-9af1-45fc14371051" containerName="mariadb-account-create" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.703272 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="784ffc31-f99b-4f32-9af1-45fc14371051" containerName="mariadb-account-create" Oct 02 11:45:42 crc kubenswrapper[4725]: E1002 11:45:42.703290 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a07949-6891-4844-acd9-3285f15c5642" containerName="ovn-config" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.703296 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a07949-6891-4844-acd9-3285f15c5642" containerName="ovn-config" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.703440 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="784ffc31-f99b-4f32-9af1-45fc14371051" containerName="mariadb-account-create" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.703477 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a07949-6891-4844-acd9-3285f15c5642" containerName="ovn-config" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.704190 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.708982 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.719896 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2adf-account-create-gdttl"] Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.777625 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9tkc\" (UniqueName: \"kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc\") pod \"keystone-2adf-account-create-gdttl\" (UID: \"b24b56cf-ce0c-4703-8a7e-559732d3912e\") " pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.879355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9tkc\" (UniqueName: \"kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc\") pod \"keystone-2adf-account-create-gdttl\" (UID: \"b24b56cf-ce0c-4703-8a7e-559732d3912e\") " pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:42 crc kubenswrapper[4725]: I1002 11:45:42.896523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9tkc\" (UniqueName: \"kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc\") pod \"keystone-2adf-account-create-gdttl\" (UID: \"b24b56cf-ce0c-4703-8a7e-559732d3912e\") " pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.042149 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.103428 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2cfd-account-create-cpr2m"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.104681 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.106559 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.134495 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2cfd-account-create-cpr2m"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.167359 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-98gqf-config-txt6q"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.193288 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-98gqf-config-txt6q"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.197173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrbv\" (UniqueName: \"kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv\") pod \"placement-2cfd-account-create-cpr2m\" (UID: \"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7\") " pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.283254 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a07949-6891-4844-acd9-3285f15c5642" path="/var/lib/kubelet/pods/c5a07949-6891-4844-acd9-3285f15c5642/volumes" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.299362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrbv\" (UniqueName: \"kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv\") pod \"placement-2cfd-account-create-cpr2m\" (UID: \"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7\") " pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.331177 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrbv\" (UniqueName: \"kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv\") pod \"placement-2cfd-account-create-cpr2m\" (UID: \"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7\") " pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.479313 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.553134 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2adf-account-create-gdttl"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.630964 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9gmkp"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.632443 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.634622 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.635533 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jwrxm" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.642601 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9gmkp"] Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.683183 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2adf-account-create-gdttl" event={"ID":"b24b56cf-ce0c-4703-8a7e-559732d3912e","Type":"ContainerStarted","Data":"ec82796b4127a2a1a0560df51de5fd3fda06dec30b4c382a59829d01bbe0e987"} Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.685586 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" event={"ID":"c80ad4a4-1047-472a-b39f-96ccebce9c00","Type":"ContainerStarted","Data":"6789ec144c802a83a978ecc4f56811e6415e0646439ba8b5c86c89e571c8c2fc"} Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.685795 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.709914 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" podStartSLOduration=3.709896181 podStartE2EDuration="3.709896181s" podCreationTimestamp="2025-10-02 11:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:45:43.702816154 +0000 UTC m=+1063.610315667" watchObservedRunningTime="2025-10-02 11:45:43.709896181 +0000 UTC m=+1063.617395644" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.807466 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.807606 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.807624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.807666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmxf\" (UniqueName: \"kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc 
kubenswrapper[4725]: I1002 11:45:43.909336 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.909378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.909428 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmxf\" (UniqueName: \"kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.909484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.914885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.918385 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.920265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.930272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmxf\" (UniqueName: \"kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf\") pod \"glance-db-sync-9gmkp\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:43 crc kubenswrapper[4725]: I1002 11:45:43.934124 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2cfd-account-create-cpr2m"] Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.039655 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9gmkp" Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.548656 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9gmkp"] Oct 02 11:45:44 crc kubenswrapper[4725]: W1002 11:45:44.552532 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb90074e8_8d41_4fb8_98b8_4d202a69c345.slice/crio-ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619 WatchSource:0}: Error finding container ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619: Status 404 returned error can't find the container with id ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619 Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.699084 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9gmkp" event={"ID":"b90074e8-8d41-4fb8-98b8-4d202a69c345","Type":"ContainerStarted","Data":"ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619"} Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.701962 4725 generic.go:334] "Generic (PLEG): container finished" podID="a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" containerID="d87d4fdbbda1a5124e64b258ee1d73de2500ae3d4b6b0fa07f6420263e046936" exitCode=0 Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.702416 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2cfd-account-create-cpr2m" event={"ID":"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7","Type":"ContainerDied","Data":"d87d4fdbbda1a5124e64b258ee1d73de2500ae3d4b6b0fa07f6420263e046936"} Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.702627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2cfd-account-create-cpr2m" event={"ID":"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7","Type":"ContainerStarted","Data":"7e9d7a5c010516208988ce0356fd5beec529dfa44d4a3b1478912f303d2c506f"} Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.706073 4725 generic.go:334] "Generic (PLEG): container finished" podID="b24b56cf-ce0c-4703-8a7e-559732d3912e" containerID="9ec4afa729ab9fcb6c23d28d960807526e3562e45fe992f8472fc03903a370fa" exitCode=0 Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.706283 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2adf-account-create-gdttl" event={"ID":"b24b56cf-ce0c-4703-8a7e-559732d3912e","Type":"ContainerDied","Data":"9ec4afa729ab9fcb6c23d28d960807526e3562e45fe992f8472fc03903a370fa"} Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.978776 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.978878 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.978940 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.979892 
4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:45:44 crc kubenswrapper[4725]: I1002 11:45:44.979991 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e" gracePeriod=600 Oct 02 11:45:45 crc kubenswrapper[4725]: I1002 11:45:45.717368 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e" exitCode=0 Oct 02 11:45:45 crc kubenswrapper[4725]: I1002 11:45:45.717463 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e"} Oct 02 11:45:45 crc kubenswrapper[4725]: I1002 11:45:45.718019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc"} Oct 02 11:45:45 crc kubenswrapper[4725]: I1002 11:45:45.718042 4725 scope.go:117] "RemoveContainer" containerID="0c35ab14b7cbe7958f83e521978b2a31e1c8daa5f2440954a4dc746628c5674e" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.116589 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.119583 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.253607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9tkc\" (UniqueName: \"kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc\") pod \"b24b56cf-ce0c-4703-8a7e-559732d3912e\" (UID: \"b24b56cf-ce0c-4703-8a7e-559732d3912e\") " Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.253740 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrbv\" (UniqueName: \"kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv\") pod \"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7\" (UID: \"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7\") " Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.259863 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv" (OuterVolumeSpecName: "kube-api-access-6vrbv") pod "a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" (UID: "a69787ae-b716-4ed5-9a16-5bc50c5e0dc7"). InnerVolumeSpecName "kube-api-access-6vrbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.259956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc" (OuterVolumeSpecName: "kube-api-access-r9tkc") pod "b24b56cf-ce0c-4703-8a7e-559732d3912e" (UID: "b24b56cf-ce0c-4703-8a7e-559732d3912e"). InnerVolumeSpecName "kube-api-access-r9tkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.355136 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9tkc\" (UniqueName: \"kubernetes.io/projected/b24b56cf-ce0c-4703-8a7e-559732d3912e-kube-api-access-r9tkc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.355269 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrbv\" (UniqueName: \"kubernetes.io/projected/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7-kube-api-access-6vrbv\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.732791 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2adf-account-create-gdttl" event={"ID":"b24b56cf-ce0c-4703-8a7e-559732d3912e","Type":"ContainerDied","Data":"ec82796b4127a2a1a0560df51de5fd3fda06dec30b4c382a59829d01bbe0e987"} Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.733142 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec82796b4127a2a1a0560df51de5fd3fda06dec30b4c382a59829d01bbe0e987" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.733031 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2adf-account-create-gdttl" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.740905 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2cfd-account-create-cpr2m" event={"ID":"a69787ae-b716-4ed5-9a16-5bc50c5e0dc7","Type":"ContainerDied","Data":"7e9d7a5c010516208988ce0356fd5beec529dfa44d4a3b1478912f303d2c506f"} Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.740944 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e9d7a5c010516208988ce0356fd5beec529dfa44d4a3b1478912f303d2c506f" Oct 02 11:45:46 crc kubenswrapper[4725]: I1002 11:45:46.740965 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2cfd-account-create-cpr2m" Oct 02 11:45:49 crc kubenswrapper[4725]: I1002 11:45:49.387177 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:45:49 crc kubenswrapper[4725]: I1002 11:45:49.674782 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.062821 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dzstm"] Oct 02 11:45:51 crc kubenswrapper[4725]: E1002 11:45:51.064314 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.064400 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: E1002 11:45:51.064462 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24b56cf-ce0c-4703-8a7e-559732d3912e" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.064524 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24b56cf-ce0c-4703-8a7e-559732d3912e" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.064787 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.064906 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24b56cf-ce0c-4703-8a7e-559732d3912e" containerName="mariadb-account-create" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.065515 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.094129 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dzstm"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.145302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnmw\" (UniqueName: \"kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw\") pod \"cinder-db-create-dzstm\" (UID: \"76616a88-ee14-40ad-98e3-70a436c064e3\") " pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.168426 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-g5mdv"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.169781 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.180541 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g5mdv"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.246621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnmw\" (UniqueName: \"kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw\") pod \"cinder-db-create-dzstm\" (UID: \"76616a88-ee14-40ad-98e3-70a436c064e3\") " pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.246775 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5xg5\" (UniqueName: \"kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5\") pod \"barbican-db-create-g5mdv\" (UID: \"77c89bd4-a34a-46e2-9d31-13e0e657c6d3\") " pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.268544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnmw\" (UniqueName: \"kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw\") pod \"cinder-db-create-dzstm\" (UID: \"76616a88-ee14-40ad-98e3-70a436c064e3\") " pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.291076 4725 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5cb7de96-e482-43c5-ba1f-7d2532e1d516"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5cb7de96-e482-43c5-ba1f-7d2532e1d516] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5cb7de96_e482_43c5_ba1f_7d2532e1d516.slice" Oct 02 11:45:51 crc kubenswrapper[4725]: E1002 11:45:51.291122 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5cb7de96-e482-43c5-ba1f-7d2532e1d516] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5cb7de96-e482-43c5-ba1f-7d2532e1d516] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5cb7de96_e482_43c5_ba1f_7d2532e1d516.slice" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" podUID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.336999 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.348951 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5xg5\" (UniqueName: \"kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5\") pod \"barbican-db-create-g5mdv\" (UID: \"77c89bd4-a34a-46e2-9d31-13e0e657c6d3\") " pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.362275 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4994j"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.363359 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4994j" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.372107 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5xg5\" (UniqueName: \"kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5\") pod \"barbican-db-create-g5mdv\" (UID: \"77c89bd4-a34a-46e2-9d31-13e0e657c6d3\") " pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.390114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.432671 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4994j"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.449364 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.449665 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-2n9b7" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="dnsmasq-dns" containerID="cri-o://cc3da6add29aa07ed3702e814aa85e3574b8a657a02e9d2e63051e0e2be3df5a" gracePeriod=10 Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.451770 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7754\" (UniqueName: \"kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754\") pod \"neutron-db-create-4994j\" (UID: \"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44\") " pod="openstack/neutron-db-create-4994j" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.489776 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kkjvm"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.491270 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.492846 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.493281 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.493738 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.494023 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zwkvx" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.499760 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.501014 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kkjvm"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.553053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.553146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.553197 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btd47\" (UniqueName: \"kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.553226 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7754\" (UniqueName: \"kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754\") pod \"neutron-db-create-4994j\" (UID: \"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44\") " pod="openstack/neutron-db-create-4994j" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.575350 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7754\" (UniqueName: \"kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754\") pod \"neutron-db-create-4994j\" (UID: \"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44\") " pod="openstack/neutron-db-create-4994j" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.654398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btd47\" (UniqueName: \"kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.654488 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.654560 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.661447 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.662547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.688472 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btd47\" (UniqueName: \"kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47\") pod \"keystone-db-sync-kkjvm\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.721929 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4994j" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.795563 4725 generic.go:334] "Generic (PLEG): container finished" podID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerID="cc3da6add29aa07ed3702e814aa85e3574b8a657a02e9d2e63051e0e2be3df5a" exitCode=0 Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.795673 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l7px" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.796080 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2n9b7" event={"ID":"ce83dde7-78f3-4d87-8656-61dd112db89e","Type":"ContainerDied","Data":"cc3da6add29aa07ed3702e814aa85e3574b8a657a02e9d2e63051e0e2be3df5a"} Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.814634 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.863788 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:51 crc kubenswrapper[4725]: I1002 11:45:51.877887 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l7px"] Oct 02 11:45:53 crc kubenswrapper[4725]: I1002 11:45:53.281025 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb7de96-e482-43c5-ba1f-7d2532e1d516" path="/var/lib/kubelet/pods/5cb7de96-e482-43c5-ba1f-7d2532e1d516/volumes" Oct 02 11:45:55 crc kubenswrapper[4725]: I1002 11:45:55.110646 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2n9b7" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.129134 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.276131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config\") pod \"ce83dde7-78f3-4d87-8656-61dd112db89e\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.276526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb\") pod \"ce83dde7-78f3-4d87-8656-61dd112db89e\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.276568 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc\") pod \"ce83dde7-78f3-4d87-8656-61dd112db89e\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.276590 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gch27\" (UniqueName: \"kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27\") pod \"ce83dde7-78f3-4d87-8656-61dd112db89e\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.276624 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb\") pod \"ce83dde7-78f3-4d87-8656-61dd112db89e\" (UID: \"ce83dde7-78f3-4d87-8656-61dd112db89e\") " Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.284379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27" (OuterVolumeSpecName: "kube-api-access-gch27") pod "ce83dde7-78f3-4d87-8656-61dd112db89e" (UID: "ce83dde7-78f3-4d87-8656-61dd112db89e"). InnerVolumeSpecName "kube-api-access-gch27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.321104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce83dde7-78f3-4d87-8656-61dd112db89e" (UID: "ce83dde7-78f3-4d87-8656-61dd112db89e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.326973 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce83dde7-78f3-4d87-8656-61dd112db89e" (UID: "ce83dde7-78f3-4d87-8656-61dd112db89e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.328937 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce83dde7-78f3-4d87-8656-61dd112db89e" (UID: "ce83dde7-78f3-4d87-8656-61dd112db89e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.352453 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config" (OuterVolumeSpecName: "config") pod "ce83dde7-78f3-4d87-8656-61dd112db89e" (UID: "ce83dde7-78f3-4d87-8656-61dd112db89e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.378961 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.379006 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.379019 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gch27\" (UniqueName: \"kubernetes.io/projected/ce83dde7-78f3-4d87-8656-61dd112db89e-kube-api-access-gch27\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.379036 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.379049 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce83dde7-78f3-4d87-8656-61dd112db89e-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.507073 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dzstm"] Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.513161 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g5mdv"] Oct 02 11:45:57 crc kubenswrapper[4725]: W1002 11:45:57.532384 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c89bd4_a34a_46e2_9d31_13e0e657c6d3.slice/crio-6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934 WatchSource:0}: Error finding container 6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934: Status 404 returned error can't find the container with id 6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934 Oct 02 11:45:57 crc kubenswrapper[4725]: W1002 11:45:57.533056 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76616a88_ee14_40ad_98e3_70a436c064e3.slice/crio-779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1 WatchSource:0}: Error finding container 779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1: Status 404 returned error can't find the container with id 779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1 Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.616403 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kkjvm"] Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.623341 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4994j"] Oct 02 11:45:57 crc 
kubenswrapper[4725]: I1002 11:45:57.857414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2n9b7" event={"ID":"ce83dde7-78f3-4d87-8656-61dd112db89e","Type":"ContainerDied","Data":"4a09a02bb55caec7346b8fa4218c8ea2e6d400798e59b63a33f2d7428104c1a5"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.857480 4725 scope.go:117] "RemoveContainer" containerID="cc3da6add29aa07ed3702e814aa85e3574b8a657a02e9d2e63051e0e2be3df5a" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.857621 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2n9b7" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.870158 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9gmkp" event={"ID":"b90074e8-8d41-4fb8-98b8-4d202a69c345","Type":"ContainerStarted","Data":"191dfccb66de35bd8f91802f9c60a6c76a2fc654e736b846da95d29b20d4b48b"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.874778 4725 generic.go:334] "Generic (PLEG): container finished" podID="76616a88-ee14-40ad-98e3-70a436c064e3" containerID="91226ef91f2cf0066ad144b00076b35cf57578980278c2a27e40787c583bbffa" exitCode=0 Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.874840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzstm" event={"ID":"76616a88-ee14-40ad-98e3-70a436c064e3","Type":"ContainerDied","Data":"91226ef91f2cf0066ad144b00076b35cf57578980278c2a27e40787c583bbffa"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.874892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzstm" event={"ID":"76616a88-ee14-40ad-98e3-70a436c064e3","Type":"ContainerStarted","Data":"779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.885031 4725 generic.go:334] "Generic (PLEG): container finished" podID="77c89bd4-a34a-46e2-9d31-13e0e657c6d3" containerID="d4d5ae8b1418a8c9236c862c22bb2b3f602013365a79ec84e61eec06623e5fba" exitCode=0 Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.886113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g5mdv" event={"ID":"77c89bd4-a34a-46e2-9d31-13e0e657c6d3","Type":"ContainerDied","Data":"d4d5ae8b1418a8c9236c862c22bb2b3f602013365a79ec84e61eec06623e5fba"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.886166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g5mdv" event={"ID":"77c89bd4-a34a-46e2-9d31-13e0e657c6d3","Type":"ContainerStarted","Data":"6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.890863 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9gmkp" podStartSLOduration=2.386857741 podStartE2EDuration="14.890844999s" podCreationTimestamp="2025-10-02 11:45:43 +0000 UTC" firstStartedPulling="2025-10-02 11:45:44.556367251 +0000 UTC m=+1064.463866714" lastFinishedPulling="2025-10-02 11:45:57.060354509 +0000 UTC m=+1076.967853972" observedRunningTime="2025-10-02 11:45:57.886197347 +0000 UTC m=+1077.793696810" watchObservedRunningTime="2025-10-02 11:45:57.890844999 +0000 UTC m=+1077.798344462" Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.896033 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4994j" 
event={"ID":"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44","Type":"ContainerStarted","Data":"c00cc9fa5254b4700e1f165534a20384f99f4dce1b1cba27d73499c3c25c6c4f"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.896089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4994j" event={"ID":"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44","Type":"ContainerStarted","Data":"9313a67285cd191cc5143c929caf8e263902963a08d7ae040974bc643f80d95e"} Oct 02 11:45:57 crc kubenswrapper[4725]: I1002 11:45:57.901249 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kkjvm" event={"ID":"308dd5cf-d967-4ffc-821d-e94014a85ddd","Type":"ContainerStarted","Data":"e7530d37a3a20b08f1085deccdb956ecd3b6db370e088a90f95d177a204c12fb"} Oct 02 11:45:58 crc kubenswrapper[4725]: I1002 11:45:58.035008 4725 scope.go:117] "RemoveContainer" containerID="ca2aaa98998be115d2d6ae7d1fa88df4d3148cf6f4b96d1dbbf3adda51a895d3" Oct 02 11:45:58 crc kubenswrapper[4725]: I1002 11:45:58.035859 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:58 crc kubenswrapper[4725]: I1002 11:45:58.044193 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2n9b7"] Oct 02 11:45:58 crc kubenswrapper[4725]: I1002 11:45:58.927502 4725 generic.go:334] "Generic (PLEG): container finished" podID="59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" containerID="c00cc9fa5254b4700e1f165534a20384f99f4dce1b1cba27d73499c3c25c6c4f" exitCode=0 Oct 02 11:45:58 crc kubenswrapper[4725]: I1002 11:45:58.927555 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4994j" event={"ID":"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44","Type":"ContainerDied","Data":"c00cc9fa5254b4700e1f165534a20384f99f4dce1b1cba27d73499c3c25c6c4f"} Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.285153 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" path="/var/lib/kubelet/pods/ce83dde7-78f3-4d87-8656-61dd112db89e/volumes" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.289675 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4994j" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.405950 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.413033 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzstm" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.421782 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7754\" (UniqueName: \"kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754\") pod \"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44\" (UID: \"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44\") " Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.431770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754" (OuterVolumeSpecName: "kube-api-access-b7754") pod "59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" (UID: "59c4bc8e-ed79-4ec9-81bc-862adbdf3f44"). InnerVolumeSpecName "kube-api-access-b7754". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.523656 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5xg5\" (UniqueName: \"kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5\") pod \"77c89bd4-a34a-46e2-9d31-13e0e657c6d3\" (UID: \"77c89bd4-a34a-46e2-9d31-13e0e657c6d3\") " Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.523824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjnmw\" (UniqueName: \"kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw\") pod \"76616a88-ee14-40ad-98e3-70a436c064e3\" (UID: \"76616a88-ee14-40ad-98e3-70a436c064e3\") " Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.524223 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7754\" (UniqueName: \"kubernetes.io/projected/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44-kube-api-access-b7754\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.527322 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5" (OuterVolumeSpecName: "kube-api-access-g5xg5") pod "77c89bd4-a34a-46e2-9d31-13e0e657c6d3" (UID: "77c89bd4-a34a-46e2-9d31-13e0e657c6d3"). InnerVolumeSpecName "kube-api-access-g5xg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.529510 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw" (OuterVolumeSpecName: "kube-api-access-fjnmw") pod "76616a88-ee14-40ad-98e3-70a436c064e3" (UID: "76616a88-ee14-40ad-98e3-70a436c064e3"). InnerVolumeSpecName "kube-api-access-fjnmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.626518 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5xg5\" (UniqueName: \"kubernetes.io/projected/77c89bd4-a34a-46e2-9d31-13e0e657c6d3-kube-api-access-g5xg5\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.626581 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjnmw\" (UniqueName: \"kubernetes.io/projected/76616a88-ee14-40ad-98e3-70a436c064e3-kube-api-access-fjnmw\") on node \"crc\" DevicePath \"\"" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.939820 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g5mdv" event={"ID":"77c89bd4-a34a-46e2-9d31-13e0e657c6d3","Type":"ContainerDied","Data":"6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934"} Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.939875 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9ca14c265c301353d6bd65199927dbdda24df60846692d6e72e24424c62934" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.939833 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g5mdv" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4994j" event={"ID":"59c4bc8e-ed79-4ec9-81bc-862adbdf3f44","Type":"ContainerDied","Data":"9313a67285cd191cc5143c929caf8e263902963a08d7ae040974bc643f80d95e"} Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941065 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4994j" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941075 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9313a67285cd191cc5143c929caf8e263902963a08d7ae040974bc643f80d95e" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzstm" event={"ID":"76616a88-ee14-40ad-98e3-70a436c064e3","Type":"ContainerDied","Data":"779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1"} Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941971 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779eac0aff36f9bb3a6a455b865b14defacd655f3f7c954e201077c240044ed1" Oct 02 11:45:59 crc kubenswrapper[4725]: I1002 11:45:59.941983 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzstm" Oct 02 11:46:02 crc kubenswrapper[4725]: I1002 11:46:02.972979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kkjvm" event={"ID":"308dd5cf-d967-4ffc-821d-e94014a85ddd","Type":"ContainerStarted","Data":"01398f78e4c3d562764ad0e60ab00e4d79aca012db80f822050401eb00f6359d"} Oct 02 11:46:02 crc kubenswrapper[4725]: I1002 11:46:02.992376 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kkjvm" podStartSLOduration=6.903251346 podStartE2EDuration="11.992349633s" podCreationTimestamp="2025-10-02 11:45:51 +0000 UTC" firstStartedPulling="2025-10-02 11:45:57.633556507 +0000 UTC m=+1077.541055970" lastFinishedPulling="2025-10-02 11:46:02.722654774 +0000 UTC m=+1082.630154257" observedRunningTime="2025-10-02 11:46:02.988388159 +0000 UTC m=+1082.895887642" watchObservedRunningTime="2025-10-02 11:46:02.992349633 +0000 UTC m=+1082.899849116" Oct 02 11:46:05 crc kubenswrapper[4725]: I1002 11:46:04.999778 4725 generic.go:334] "Generic (PLEG): container finished" podID="b90074e8-8d41-4fb8-98b8-4d202a69c345" containerID="191dfccb66de35bd8f91802f9c60a6c76a2fc654e736b846da95d29b20d4b48b" exitCode=0 Oct 02 11:46:05 crc kubenswrapper[4725]: I1002 11:46:04.999886 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9gmkp" event={"ID":"b90074e8-8d41-4fb8-98b8-4d202a69c345","Type":"ContainerDied","Data":"191dfccb66de35bd8f91802f9c60a6c76a2fc654e736b846da95d29b20d4b48b"} Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.445035 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9gmkp" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.555448 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data\") pod \"b90074e8-8d41-4fb8-98b8-4d202a69c345\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.555561 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle\") pod \"b90074e8-8d41-4fb8-98b8-4d202a69c345\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.555714 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data\") pod \"b90074e8-8d41-4fb8-98b8-4d202a69c345\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.555774 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmxf\" (UniqueName: \"kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf\") pod \"b90074e8-8d41-4fb8-98b8-4d202a69c345\" (UID: \"b90074e8-8d41-4fb8-98b8-4d202a69c345\") " Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.563812 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b90074e8-8d41-4fb8-98b8-4d202a69c345" (UID: "b90074e8-8d41-4fb8-98b8-4d202a69c345"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.564256 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf" (OuterVolumeSpecName: "kube-api-access-fbmxf") pod "b90074e8-8d41-4fb8-98b8-4d202a69c345" (UID: "b90074e8-8d41-4fb8-98b8-4d202a69c345"). InnerVolumeSpecName "kube-api-access-fbmxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.584875 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b90074e8-8d41-4fb8-98b8-4d202a69c345" (UID: "b90074e8-8d41-4fb8-98b8-4d202a69c345"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.615591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data" (OuterVolumeSpecName: "config-data") pod "b90074e8-8d41-4fb8-98b8-4d202a69c345" (UID: "b90074e8-8d41-4fb8-98b8-4d202a69c345"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.658489 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.658548 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmxf\" (UniqueName: \"kubernetes.io/projected/b90074e8-8d41-4fb8-98b8-4d202a69c345-kube-api-access-fbmxf\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.658573 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:06 crc kubenswrapper[4725]: I1002 11:46:06.658604 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90074e8-8d41-4fb8-98b8-4d202a69c345-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.024825 4725 generic.go:334] "Generic (PLEG): container finished" podID="308dd5cf-d967-4ffc-821d-e94014a85ddd" containerID="01398f78e4c3d562764ad0e60ab00e4d79aca012db80f822050401eb00f6359d" exitCode=0 Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.024898 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kkjvm" event={"ID":"308dd5cf-d967-4ffc-821d-e94014a85ddd","Type":"ContainerDied","Data":"01398f78e4c3d562764ad0e60ab00e4d79aca012db80f822050401eb00f6359d"} Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.028366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9gmkp" event={"ID":"b90074e8-8d41-4fb8-98b8-4d202a69c345","Type":"ContainerDied","Data":"ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619"} Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.028424 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba42b04ac44e5d920664b0df631da9f784855ee8cfaa5332bd3362b889b82619" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.028521 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9gmkp" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.462560 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.462901 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.462914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.462929 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76616a88-ee14-40ad-98e3-70a436c064e3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.462934 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="76616a88-ee14-40ad-98e3-70a436c064e3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.462949 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90074e8-8d41-4fb8-98b8-4d202a69c345" containerName="glance-db-sync" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.462954 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90074e8-8d41-4fb8-98b8-4d202a69c345" containerName="glance-db-sync" Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.462969 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="init" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.462975 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="init" Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.462997 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c89bd4-a34a-46e2-9d31-13e0e657c6d3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463004 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c89bd4-a34a-46e2-9d31-13e0e657c6d3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: E1002 11:46:07.463017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="dnsmasq-dns" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463022 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="dnsmasq-dns" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463173 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90074e8-8d41-4fb8-98b8-4d202a69c345" containerName="glance-db-sync" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463189 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="76616a88-ee14-40ad-98e3-70a436c064e3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463197 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c89bd4-a34a-46e2-9d31-13e0e657c6d3" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463214 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce83dde7-78f3-4d87-8656-61dd112db89e" containerName="dnsmasq-dns" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463220 4725 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" containerName="mariadb-database-create" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.463985 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.478561 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.574585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.574669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.574840 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.574955 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hd6p\" (UniqueName: \"kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.574987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.575031 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676098 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676173 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc\") pod 
\"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676199 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hd6p\" (UniqueName: \"kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676244 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.676269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.677163 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.677772 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.677782 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.677840 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.678020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " 
pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.696291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hd6p\" (UniqueName: \"kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p\") pod \"dnsmasq-dns-7ff5475cc9-4k65l\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:07 crc kubenswrapper[4725]: I1002 11:46:07.787603 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.284521 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:08 crc kubenswrapper[4725]: W1002 11:46:08.289043 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda524252d_b028_4456_84da_19b0b8c320d1.slice/crio-7f3dc7775e196675da1cf676338f64cde180c05a3b5950a749a992f143eb129a WatchSource:0}: Error finding container 7f3dc7775e196675da1cf676338f64cde180c05a3b5950a749a992f143eb129a: Status 404 returned error can't find the container with id 7f3dc7775e196675da1cf676338f64cde180c05a3b5950a749a992f143eb129a Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.357992 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.494636 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle\") pod \"308dd5cf-d967-4ffc-821d-e94014a85ddd\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.494968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data\") pod \"308dd5cf-d967-4ffc-821d-e94014a85ddd\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.495094 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btd47\" (UniqueName: \"kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47\") pod \"308dd5cf-d967-4ffc-821d-e94014a85ddd\" (UID: \"308dd5cf-d967-4ffc-821d-e94014a85ddd\") " Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.503649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47" (OuterVolumeSpecName: "kube-api-access-btd47") pod "308dd5cf-d967-4ffc-821d-e94014a85ddd" (UID: "308dd5cf-d967-4ffc-821d-e94014a85ddd"). InnerVolumeSpecName "kube-api-access-btd47". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.528693 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "308dd5cf-d967-4ffc-821d-e94014a85ddd" (UID: "308dd5cf-d967-4ffc-821d-e94014a85ddd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.556594 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data" (OuterVolumeSpecName: "config-data") pod "308dd5cf-d967-4ffc-821d-e94014a85ddd" (UID: "308dd5cf-d967-4ffc-821d-e94014a85ddd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.597199 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.597534 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308dd5cf-d967-4ffc-821d-e94014a85ddd-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:08 crc kubenswrapper[4725]: I1002 11:46:08.597645 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btd47\" (UniqueName: \"kubernetes.io/projected/308dd5cf-d967-4ffc-821d-e94014a85ddd-kube-api-access-btd47\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.054032 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kkjvm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.054023 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kkjvm" event={"ID":"308dd5cf-d967-4ffc-821d-e94014a85ddd","Type":"ContainerDied","Data":"e7530d37a3a20b08f1085deccdb956ecd3b6db370e088a90f95d177a204c12fb"} Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.054584 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7530d37a3a20b08f1085deccdb956ecd3b6db370e088a90f95d177a204c12fb" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.056092 4725 generic.go:334] "Generic (PLEG): container finished" podID="a524252d-b028-4456-84da-19b0b8c320d1" containerID="acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f" exitCode=0 Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.056164 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" event={"ID":"a524252d-b028-4456-84da-19b0b8c320d1","Type":"ContainerDied","Data":"acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f"} Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.056196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" event={"ID":"a524252d-b028-4456-84da-19b0b8c320d1","Type":"ContainerStarted","Data":"7f3dc7775e196675da1cf676338f64cde180c05a3b5950a749a992f143eb129a"} Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.312909 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.357061 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:09 crc kubenswrapper[4725]: E1002 11:46:09.357444 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308dd5cf-d967-4ffc-821d-e94014a85ddd" containerName="keystone-db-sync" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.357465 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="308dd5cf-d967-4ffc-821d-e94014a85ddd" containerName="keystone-db-sync" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.357621 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="308dd5cf-d967-4ffc-821d-e94014a85ddd" containerName="keystone-db-sync" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.358454 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.373423 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w9rkq"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.374702 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.385801 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.388534 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zwkvx" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.388821 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.388976 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.389142 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.465714 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9rkq"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515103 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515143 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515175 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515192 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515240 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515273 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515327 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515342 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5smk\" (UniqueName: \"kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.515390 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpsz\" (UniqueName: \"kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.564415 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.565976 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.568439 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.568835 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-hl9mw" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.569065 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.569294 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.616897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617142 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617223 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5smk\" (UniqueName: \"kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617316 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpsz\" (UniqueName: \"kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617352 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.617399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.622052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.624306 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.624341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.625200 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.625247 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: 
\"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.629588 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.636374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.636713 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.640492 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.647389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.648918 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.655402 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5smk\" (UniqueName: \"kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk\") pod \"dnsmasq-dns-5c5cc7c5ff-cr8gm\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.666280 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpsz\" (UniqueName: \"kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz\") pod \"keystone-bootstrap-w9rkq\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.700099 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.702062 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.703194 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.704767 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.704864 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.706375 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.721493 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.722076 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.722197 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.722236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4f8w\" (UniqueName: \"kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.722291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.722349 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832438 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832540 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832568 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832588 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832609 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4f8w\" (UniqueName: \"kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832650 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832729 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832763 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.832870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xtfv\" (UniqueName: 
\"kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.833641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.845347 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.847727 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.850030 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.850873 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.860726 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.861258 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jwrxm" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.861573 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.889253 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.899404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.902325 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.908631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4f8w\" (UniqueName: \"kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w\") pod \"horizon-76cc7bf9c-g27mj\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.909029 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.929372 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.947048 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q2mkt"] Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.948925 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951699 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951780 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951815 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xtfv\" (UniqueName: \"kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951868 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951920 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.951971 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952062 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952156 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952441 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952476 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952499 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.952579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.955866 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.956114 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jm4cc" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.956255 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.964814 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q2mkt"] Oct 02 11:46:09 crc 
kubenswrapper[4725]: I1002 11:46:09.964837 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.965054 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.965673 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.979791 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.984396 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:09 crc kubenswrapper[4725]: I1002 11:46:09.995651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.000675 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.023557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xtfv\" (UniqueName: \"kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv\") pod \"ceilometer-0\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") " pod="openstack/ceilometer-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.023623 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.025167 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.029857 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056108 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056172 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056198 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056250 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056272 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbj6q\" (UniqueName: \"kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056334 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056348 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056408 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056438 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gzg\" (UniqueName: \"kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056554 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qwg\" (UniqueName: \"kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056589 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.056638 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.059213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.062638 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.063025 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.067142 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.084170 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.085933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" event={"ID":"a524252d-b028-4456-84da-19b0b8c320d1","Type":"ContainerStarted","Data":"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842"} Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.086096 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="dnsmasq-dns" containerID="cri-o://5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842" gracePeriod=10 Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.086182 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.091796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.095039 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.109821 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" podStartSLOduration=3.109802393 podStartE2EDuration="3.109802393s" podCreationTimestamp="2025-10-02 11:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:10.107761059 +0000 UTC m=+1090.015260532" watchObservedRunningTime="2025-10-02 11:46:10.109802393 +0000 UTC m=+1090.017301856" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.110372 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158188 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qwg\" (UniqueName: \"kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158239 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158266 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158319 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158386 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158412 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbj6q\" (UniqueName: \"kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158505 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158577 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gzg\" (UniqueName: \"kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.158671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.159497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.161535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.161535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.162295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.162853 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.163048 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.163285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.163431 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.165440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.166398 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.168020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.171204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.177457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.178057 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qwg\" (UniqueName: 
\"kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg\") pod \"placement-db-sync-q2mkt\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.179678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gzg\" (UniqueName: \"kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg\") pod \"horizon-8647496bdc-w289x\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.181329 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbj6q\" (UniqueName: \"kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q\") pod \"dnsmasq-dns-8b5c85b87-rhqpv\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.199482 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.256612 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.276984 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.303637 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.321773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2mkt" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.351920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.494357 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:10 crc kubenswrapper[4725]: W1002 11:46:10.584853 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ef6588_03db_4feb_b20d_78a45ace6749.slice/crio-96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc WatchSource:0}: Error finding container 96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc: Status 404 returned error can't find the container with id 96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.589930 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9rkq"] Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.607641 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:10 crc kubenswrapper[4725]: W1002 11:46:10.614477 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb878d4_98ff_4826_a952_54adfb9656e1.slice/crio-27299095c6dc4309e9826ef38139a290097a8368c8284e8034879234ea3b04da WatchSource:0}: Error finding container 27299095c6dc4309e9826ef38139a290097a8368c8284e8034879234ea3b04da: Status 404 returned error can't find the container with id 27299095c6dc4309e9826ef38139a290097a8368c8284e8034879234ea3b04da Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681350 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681498 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hd6p\" (UniqueName: \"kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681551 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681583 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.681693 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0\") pod \"a524252d-b028-4456-84da-19b0b8c320d1\" (UID: \"a524252d-b028-4456-84da-19b0b8c320d1\") " Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.689330 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:10 crc kubenswrapper[4725]: E1002 11:46:10.689689 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="init" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.689704 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="init" Oct 02 11:46:10 crc kubenswrapper[4725]: E1002 11:46:10.689728 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="dnsmasq-dns" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.689734 4725 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="dnsmasq-dns" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.694245 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a524252d-b028-4456-84da-19b0b8c320d1" containerName="dnsmasq-dns" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.695407 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p" (OuterVolumeSpecName: "kube-api-access-4hd6p") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). InnerVolumeSpecName "kube-api-access-4hd6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.695535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.705973 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.760912 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.780904 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.783208 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.783384 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hd6p\" (UniqueName: \"kubernetes.io/projected/a524252d-b028-4456-84da-19b0b8c320d1-kube-api-access-4hd6p\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.789327 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.789610 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.802004 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.814034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config" (OuterVolumeSpecName: "config") pod "a524252d-b028-4456-84da-19b0b8c320d1" (UID: "a524252d-b028-4456-84da-19b0b8c320d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.877800 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:10 crc kubenswrapper[4725]: W1002 11:46:10.885248 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6de048_7002_4f87_9e87_14af9e141c8b.slice/crio-c222f21676b0543148e483b56a7ffe7216b85f77ec377ab66cf2d0b90d909c2e WatchSource:0}: Error finding container c222f21676b0543148e483b56a7ffe7216b85f77ec377ab66cf2d0b90d909c2e: Status 404 returned error can't find the container with id c222f21676b0543148e483b56a7ffe7216b85f77ec377ab66cf2d0b90d909c2e Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885269 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885390 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885457 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885504 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885683 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zkf\" (UniqueName: \"kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf\") pod \"glance-default-internal-api-0\" (UID: 
\"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885803 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885959 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885980 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.885993 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.886007 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a524252d-b028-4456-84da-19b0b8c320d1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.987681 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988210 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988356 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988310 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988506 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-85zkf\" (UniqueName: \"kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988646 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.988880 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.989244 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:10 crc kubenswrapper[4725]: I1002 11:46:10.990395 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.000925 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.005327 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.007061 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.011372 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zkf\" (UniqueName: \"kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf\") pod \"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.016497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.028533 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.103762 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc7bf9c-g27mj" event={"ID":"be6de048-7002-4f87-9e87-14af9e141c8b","Type":"ContainerStarted","Data":"c222f21676b0543148e483b56a7ffe7216b85f77ec377ab66cf2d0b90d909c2e"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.104269 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.114517 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" event={"ID":"dcb878d4-98ff-4826-a952-54adfb9656e1","Type":"ContainerStarted","Data":"27299095c6dc4309e9826ef38139a290097a8368c8284e8034879234ea3b04da"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.115312 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a7dd-account-create-7lhl9"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.116501 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.119276 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.120112 4725 generic.go:334] "Generic (PLEG): container finished" podID="a524252d-b028-4456-84da-19b0b8c320d1" containerID="5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842" exitCode=0 Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.120240 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.120867 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" event={"ID":"a524252d-b028-4456-84da-19b0b8c320d1","Type":"ContainerDied","Data":"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.120905 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-4k65l" event={"ID":"a524252d-b028-4456-84da-19b0b8c320d1","Type":"ContainerDied","Data":"7f3dc7775e196675da1cf676338f64cde180c05a3b5950a749a992f143eb129a"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.120922 4725 scope.go:117] "RemoveContainer" containerID="5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.121667 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerStarted","Data":"6c47dee1c7c2dbddf59800472a18a20edf3732c515932bd0b5a38fc69d9dcfb3"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.123309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9rkq" event={"ID":"e7ef6588-03db-4feb-b20d-78a45ace6749","Type":"ContainerStarted","Data":"96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc"} Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.125830 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a7dd-account-create-7lhl9"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.174107 4725 scope.go:117] "RemoveContainer" containerID="acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.185633 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.198074 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-4k65l"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.224553 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7382-account-create-kfv6s"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.225633 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.227881 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.233701 4725 scope.go:117] "RemoveContainer" containerID="5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842" Oct 02 11:46:11 crc kubenswrapper[4725]: E1002 11:46:11.234030 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842\": container with ID starting with 5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842 not found: ID does not exist" containerID="5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.234055 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842"} err="failed to get container status \"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842\": rpc error: code = NotFound desc = could not find container \"5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842\": container with ID starting with 5304363857b215a4f2149375763335de4fb43fea48223503dc7df2d2a728c842 not found: ID does not exist" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.234075 4725 scope.go:117] "RemoveContainer" containerID="acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f" Oct 02 11:46:11 crc kubenswrapper[4725]: E1002 11:46:11.234454 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f\": container with ID starting with acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f not found: ID does not exist" containerID="acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.234472 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f"} err="failed to get container status \"acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f\": rpc error: code = NotFound desc = could not find container \"acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f\": container with ID starting with acb81f71e98ad3226946a5dd7c8e59f0f738125231e32e123299c1fa2ff9a58f not found: ID does not exist" Oct 02 11:46:11 crc kubenswrapper[4725]: W1002 11:46:11.247062 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca38056_f65f_4d74_b0d2_258e660bba10.slice/crio-64fc8c882bc2eafd02bd4c54c9f2f468c1ded2da6a5746badd79aea644570e6a WatchSource:0}: Error finding container 64fc8c882bc2eafd02bd4c54c9f2f468c1ded2da6a5746badd79aea644570e6a: Status 404 returned error can't find the container with id 64fc8c882bc2eafd02bd4c54c9f2f468c1ded2da6a5746badd79aea644570e6a Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.299342 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28mbk\" (UniqueName: \"kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk\") pod 
\"cinder-a7dd-account-create-7lhl9\" (UID: \"d384f0dd-dd01-4a5c-89d2-7c5981ad434d\") " pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.340296 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a524252d-b028-4456-84da-19b0b8c320d1" path="/var/lib/kubelet/pods/a524252d-b028-4456-84da-19b0b8c320d1/volumes" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.343632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7382-account-create-kfv6s"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.343669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.343680 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q2mkt"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.364240 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.391191 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.406167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28mbk\" (UniqueName: \"kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk\") pod \"cinder-a7dd-account-create-7lhl9\" (UID: \"d384f0dd-dd01-4a5c-89d2-7c5981ad434d\") " pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.406563 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6q59\" (UniqueName: \"kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59\") pod \"barbican-7382-account-create-kfv6s\" (UID: \"4a8b387f-3df2-4803-b785-99815cea430a\") " pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.435864 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28mbk\" (UniqueName: \"kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk\") pod \"cinder-a7dd-account-create-7lhl9\" (UID: \"d384f0dd-dd01-4a5c-89d2-7c5981ad434d\") " pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.476962 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.499499 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-daa4-account-create-gbmhh"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.501102 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.506182 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.508494 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6q59\" (UniqueName: \"kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59\") pod \"barbican-7382-account-create-kfv6s\" (UID: \"4a8b387f-3df2-4803-b785-99815cea430a\") " pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.514633 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-daa4-account-create-gbmhh"] Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.536700 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6q59\" (UniqueName: \"kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59\") pod \"barbican-7382-account-create-kfv6s\" (UID: \"4a8b387f-3df2-4803-b785-99815cea430a\") " pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.552875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.609885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll259\" (UniqueName: \"kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259\") pod \"neutron-daa4-account-create-gbmhh\" (UID: \"480d8d51-eb78-4bce-8f31-0ff29e6ac822\") " pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.711396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll259\" (UniqueName: \"kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259\") pod \"neutron-daa4-account-create-gbmhh\" (UID: \"480d8d51-eb78-4bce-8f31-0ff29e6ac822\") " pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.729621 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll259\" (UniqueName: \"kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259\") pod \"neutron-daa4-account-create-gbmhh\" (UID: \"480d8d51-eb78-4bce-8f31-0ff29e6ac822\") " pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:11 crc kubenswrapper[4725]: I1002 11:46:11.824828 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:11.844639 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.041886 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a7dd-account-create-7lhl9"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.054254 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7382-account-create-kfv6s"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.147025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7382-account-create-kfv6s" event={"ID":"4a8b387f-3df2-4803-b785-99815cea430a","Type":"ContainerStarted","Data":"75292678e0c833a0f2a07f48b3cd02eb4c91b09de34b7575b81fd5f2a622c54e"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.150681 4725 generic.go:334] "Generic (PLEG): container finished" podID="02914303-44fa-48fc-842e-d0876f44e300" containerID="1e6fedd1f9f7de0914ad38ef6718da90f135adde5a4024ecab8e3d9c6dd5735b" exitCode=0 Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.150850 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" event={"ID":"02914303-44fa-48fc-842e-d0876f44e300","Type":"ContainerDied","Data":"1e6fedd1f9f7de0914ad38ef6718da90f135adde5a4024ecab8e3d9c6dd5735b"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.150892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" event={"ID":"02914303-44fa-48fc-842e-d0876f44e300","Type":"ContainerStarted","Data":"ce9cd3187747401f75ff3090f436c64d1eebf97a57b74760821bd18d252b83f0"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.153962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2mkt" event={"ID":"050130d8-978e-40e2-9869-ffdbcf50da81","Type":"ContainerStarted","Data":"17de426e7cea762e39e88a3ca16301f7abfca11e27756a0d2071609dc2c6e76b"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.157142 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerStarted","Data":"ef7ebed5097a27d2f2d2ae5a908a4b377d942249a1e26f728931577eb038f2ba"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.174459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7dd-account-create-7lhl9" event={"ID":"d384f0dd-dd01-4a5c-89d2-7c5981ad434d","Type":"ContainerStarted","Data":"16d3076985e43d953351db97f87b45a9197bd1183631292bea966fba8f0850a3"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.177531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647496bdc-w289x" event={"ID":"eca38056-f65f-4d74-b0d2-258e660bba10","Type":"ContainerStarted","Data":"64fc8c882bc2eafd02bd4c54c9f2f468c1ded2da6a5746badd79aea644570e6a"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.180000 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9rkq" event={"ID":"e7ef6588-03db-4feb-b20d-78a45ace6749","Type":"ContainerStarted","Data":"7a497cedd27c199a591fb3a06d42c73d74bf658c1b0dbb5ed50c365c1e124b5f"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.196013 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w9rkq" podStartSLOduration=3.19599442 podStartE2EDuration="3.19599442s" podCreationTimestamp="2025-10-02 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:12.195807395 +0000 UTC m=+1092.103306858" watchObservedRunningTime="2025-10-02 11:46:12.19599442 +0000 UTC m=+1092.103493883" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.199303 4725 generic.go:334] "Generic (PLEG): container finished" podID="dcb878d4-98ff-4826-a952-54adfb9656e1" containerID="049befb5a9d0ffb83e80b15bb5911d99e9c1cb4638ae81ebe055248aca819a8f" exitCode=0 Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.199428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" event={"ID":"dcb878d4-98ff-4826-a952-54adfb9656e1","Type":"ContainerDied","Data":"049befb5a9d0ffb83e80b15bb5911d99e9c1cb4638ae81ebe055248aca819a8f"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:12.210031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerStarted","Data":"82a9a4a046f7c9a8784afd7eb5242052bf122d6e19ac84caee246e48087a89b8"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.238310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7dd-account-create-7lhl9" event={"ID":"d384f0dd-dd01-4a5c-89d2-7c5981ad434d","Type":"ContainerStarted","Data":"20675bdf17716c9c9f01243837d6d98ffc919aefaf319744b4fcdd280d9cb10c"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.246184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerStarted","Data":"7e63f66f70d06cb45f66a7ab8160680fe814c10c2707bfd4ccaf9045de1513f7"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.255776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7382-account-create-kfv6s" event={"ID":"4a8b387f-3df2-4803-b785-99815cea430a","Type":"ContainerStarted","Data":"c6986b71f14ebbd4af9f33592aee7688302b2447f72519436e1afd2716aab812"} Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.350918 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.356369 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.398806 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.400635 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.411635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.448572 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.585331 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.585384 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.585569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.585677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.585884 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4slt\" (UniqueName: \"kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.687833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.687887 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.687963 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.687991 
4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.688035 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4slt\" (UniqueName: \"kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.688675 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.689153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.689557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.701980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4slt\" (UniqueName: \"kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.708780 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key\") pod \"horizon-7595c47df5-46cnx\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:13.724725 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:14.283091 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a7dd-account-create-7lhl9" podStartSLOduration=3.283074621 podStartE2EDuration="3.283074621s" podCreationTimestamp="2025-10-02 11:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:14.277458833 +0000 UTC m=+1094.184958296" watchObservedRunningTime="2025-10-02 11:46:14.283074621 +0000 UTC m=+1094.190574084" Oct 02 11:46:19 crc kubenswrapper[4725]: I1002 11:46:18.312054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" event={"ID":"02914303-44fa-48fc-842e-d0876f44e300","Type":"ContainerStarted","Data":"6916d612f2ae5c2678c41664a2a0420b0467c7528c94a7f744a1c4edd24af1c3"} Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.158097 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.338328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerStarted","Data":"a558edd5103a0fc5170ff5551f41cb94cbb3f99a24556655f28a980e032fd318"} Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.347850 4725 generic.go:334] "Generic (PLEG): container finished" podID="4a8b387f-3df2-4803-b785-99815cea430a" containerID="c6986b71f14ebbd4af9f33592aee7688302b2447f72519436e1afd2716aab812" exitCode=0 Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.348279 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7382-account-create-kfv6s" event={"ID":"4a8b387f-3df2-4803-b785-99815cea430a","Type":"ContainerDied","Data":"c6986b71f14ebbd4af9f33592aee7688302b2447f72519436e1afd2716aab812"} Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.353204 4725 generic.go:334] "Generic (PLEG): container finished" podID="d384f0dd-dd01-4a5c-89d2-7c5981ad434d" containerID="20675bdf17716c9c9f01243837d6d98ffc919aefaf319744b4fcdd280d9cb10c" exitCode=0 Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.353278 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7dd-account-create-7lhl9" event={"ID":"d384f0dd-dd01-4a5c-89d2-7c5981ad434d","Type":"ContainerDied","Data":"20675bdf17716c9c9f01243837d6d98ffc919aefaf319744b4fcdd280d9cb10c"} Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.355919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerStarted","Data":"a49788d91b3917667323db768960d4051702c541aff145d2b41119aea500f1a5"} Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.356042 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.356052 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-log" containerID="cri-o://7e63f66f70d06cb45f66a7ab8160680fe814c10c2707bfd4ccaf9045de1513f7" gracePeriod=30 Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.356096 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-httpd" containerID="cri-o://a49788d91b3917667323db768960d4051702c541aff145d2b41119aea500f1a5" gracePeriod=30 Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.398037 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podStartSLOduration=11.398016771 podStartE2EDuration="11.398016771s" podCreationTimestamp="2025-10-02 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:20.393135812 +0000 UTC m=+1100.300635295" watchObservedRunningTime="2025-10-02 11:46:20.398016771 +0000 UTC m=+1100.305516234" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.440103 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.440059599 podStartE2EDuration="11.440059599s" podCreationTimestamp="2025-10-02 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:20.420694469 +0000 UTC m=+1100.328193932" watchObservedRunningTime="2025-10-02 11:46:20.440059599 +0000 UTC m=+1100.347559082" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.461549 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.474061 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-daa4-account-create-gbmhh"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.527396 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.566366 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.568062 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.579442 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.581925 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.624024 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.658938 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b797cdcc6-7cf2m"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.660259 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.689174 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b797cdcc6-7cf2m"] Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714428 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9k2m\" (UniqueName: \"kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714686 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714817 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714911 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.714983 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-tls-certs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816220 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-scripts\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816340 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-config-data\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816363 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-secret-key\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816455 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kthsg\" (UniqueName: \"kubernetes.io/projected/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-kube-api-access-kthsg\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816555 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-combined-ca-bundle\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816588 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9k2m\" (UniqueName: \"kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816838 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-logs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.816900 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.817433 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.817957 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.825429 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.836628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.837145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " 
pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.842804 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9k2m\" (UniqueName: \"kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m\") pod \"horizon-74988cc86b-c7lcm\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.902282 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.918711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kthsg\" (UniqueName: \"kubernetes.io/projected/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-kube-api-access-kthsg\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.918856 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-combined-ca-bundle\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.918974 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-logs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.919058 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-tls-certs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.919136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-scripts\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.919508 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-config-data\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.919543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-secret-key\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.920070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-logs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: 
\"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.921415 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-scripts\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.923338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-combined-ca-bundle\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.924424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-config-data\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.925076 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-tls-certs\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.928467 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-horizon-secret-key\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.942465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kthsg\" (UniqueName: \"kubernetes.io/projected/9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8-kube-api-access-kthsg\") pod \"horizon-b797cdcc6-7cf2m\" (UID: \"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8\") " pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:20 crc kubenswrapper[4725]: I1002 11:46:20.982562 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.366154 4725 generic.go:334] "Generic (PLEG): container finished" podID="e7ef6588-03db-4feb-b20d-78a45ace6749" containerID="7a497cedd27c199a591fb3a06d42c73d74bf658c1b0dbb5ed50c365c1e124b5f" exitCode=0 Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.366406 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9rkq" event={"ID":"e7ef6588-03db-4feb-b20d-78a45ace6749","Type":"ContainerDied","Data":"7a497cedd27c199a591fb3a06d42c73d74bf658c1b0dbb5ed50c365c1e124b5f"} Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.378921 4725 generic.go:334] "Generic (PLEG): container finished" podID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerID="a49788d91b3917667323db768960d4051702c541aff145d2b41119aea500f1a5" exitCode=0 Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.378952 4725 generic.go:334] "Generic (PLEG): container finished" podID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerID="7e63f66f70d06cb45f66a7ab8160680fe814c10c2707bfd4ccaf9045de1513f7" exitCode=143 Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.378954 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerDied","Data":"a49788d91b3917667323db768960d4051702c541aff145d2b41119aea500f1a5"} Oct 02 11:46:21 crc kubenswrapper[4725]: I1002 11:46:21.378998 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerDied","Data":"7e63f66f70d06cb45f66a7ab8160680fe814c10c2707bfd4ccaf9045de1513f7"} Oct 02 11:46:22 crc kubenswrapper[4725]: W1002 11:46:22.610947 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82823ab6_7a43_44da_8ad3_33edd043d777.slice/crio-2d504520f3333cc4cbb13347147056d5fa0ac157d12979150d513dc03a61ab59 WatchSource:0}: Error finding container 2d504520f3333cc4cbb13347147056d5fa0ac157d12979150d513dc03a61ab59: Status 404 returned error can't find the container with id 2d504520f3333cc4cbb13347147056d5fa0ac157d12979150d513dc03a61ab59 Oct 02 11:46:22 crc kubenswrapper[4725]: W1002 11:46:22.616028 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod480d8d51_eb78_4bce_8f31_0ff29e6ac822.slice/crio-bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6 WatchSource:0}: Error finding container bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6: Status 404 returned error can't find the container with id bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6 Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.707879 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.861237 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5smk\" (UniqueName: \"kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.861331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.861388 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.861435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.862139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.862188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc\") pod \"dcb878d4-98ff-4826-a952-54adfb9656e1\" (UID: \"dcb878d4-98ff-4826-a952-54adfb9656e1\") " Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.867949 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk" (OuterVolumeSpecName: "kube-api-access-l5smk") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "kube-api-access-l5smk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.885306 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.886036 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.888069 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config" (OuterVolumeSpecName: "config") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.892476 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.917203 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcb878d4-98ff-4826-a952-54adfb9656e1" (UID: "dcb878d4-98ff-4826-a952-54adfb9656e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964036 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5smk\" (UniqueName: \"kubernetes.io/projected/dcb878d4-98ff-4826-a952-54adfb9656e1-kube-api-access-l5smk\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964072 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964082 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964091 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964100 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:22 crc kubenswrapper[4725]: I1002 11:46:22.964109 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcb878d4-98ff-4826-a952-54adfb9656e1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.406544 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa4-account-create-gbmhh" event={"ID":"480d8d51-eb78-4bce-8f31-0ff29e6ac822","Type":"ContainerStarted","Data":"bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6"} Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.407509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" 
event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerStarted","Data":"2d504520f3333cc4cbb13347147056d5fa0ac157d12979150d513dc03a61ab59"} Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.409343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" event={"ID":"dcb878d4-98ff-4826-a952-54adfb9656e1","Type":"ContainerDied","Data":"27299095c6dc4309e9826ef38139a290097a8368c8284e8034879234ea3b04da"} Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.409375 4725 scope.go:117] "RemoveContainer" containerID="049befb5a9d0ffb83e80b15bb5911d99e9c1cb4638ae81ebe055248aca819a8f" Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.409559 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm" Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.479193 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:23 crc kubenswrapper[4725]: I1002 11:46:23.487239 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cr8gm"] Oct 02 11:46:25 crc kubenswrapper[4725]: I1002 11:46:25.279001 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb878d4-98ff-4826-a952-54adfb9656e1" path="/var/lib/kubelet/pods/dcb878d4-98ff-4826-a952-54adfb9656e1/volumes" Oct 02 11:46:25 crc kubenswrapper[4725]: I1002 11:46:25.354170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:46:25 crc kubenswrapper[4725]: I1002 11:46:25.445238 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:46:25 crc kubenswrapper[4725]: I1002 11:46:25.445473 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" containerID="cri-o://6789ec144c802a83a978ecc4f56811e6415e0646439ba8b5c86c89e571c8c2fc" gracePeriod=10 Oct 02 11:46:26 crc kubenswrapper[4725]: I1002 11:46:26.335537 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Oct 02 11:46:26 crc kubenswrapper[4725]: I1002 11:46:26.446487 4725 generic.go:334] "Generic (PLEG): container finished" podID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerID="6789ec144c802a83a978ecc4f56811e6415e0646439ba8b5c86c89e571c8c2fc" exitCode=0 Oct 02 11:46:26 crc kubenswrapper[4725]: I1002 11:46:26.446537 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" event={"ID":"c80ad4a4-1047-472a-b39f-96ccebce9c00","Type":"ContainerDied","Data":"6789ec144c802a83a978ecc4f56811e6415e0646439ba8b5c86c89e571c8c2fc"} Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.916018 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.916997 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf5h544hc8h94hd8hf6hc7h644hc6hbfh5b8h98hfch56dh656h57fh664h5bbh5b5hdbh549h64bh55bh58h664h578hdbh677h699hb4h655h548q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4f8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76cc7bf9c-g27mj_openstack(be6de048-7002-4f87-9e87-14af9e141c8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.919251 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-76cc7bf9c-g27mj" podUID="be6de048-7002-4f87-9e87-14af9e141c8b" Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.929158 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.929849 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n689h56ch68dhc4h5d7h5b5h98h544h66bh6bh7hbbh6bh55fh59bh58fh595h68hcfh58ch5d5h5cdh5b5h57fhffh5c7h667h554h58h5bdh594h66fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69gzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8647496bdc-w289x_openstack(eca38056-f65f-4d74-b0d2-258e660bba10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:46:29 crc kubenswrapper[4725]: E1002 11:46:29.932030 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-8647496bdc-w289x" podUID="eca38056-f65f-4d74-b0d2-258e660bba10" Oct 02 11:46:29 crc kubenswrapper[4725]: I1002 11:46:29.996229 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.109557 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6q59\" (UniqueName: \"kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59\") pod \"4a8b387f-3df2-4803-b785-99815cea430a\" (UID: \"4a8b387f-3df2-4803-b785-99815cea430a\") " Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.113517 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59" (OuterVolumeSpecName: "kube-api-access-d6q59") pod "4a8b387f-3df2-4803-b785-99815cea430a" (UID: "4a8b387f-3df2-4803-b785-99815cea430a"). InnerVolumeSpecName "kube-api-access-d6q59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.211287 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6q59\" (UniqueName: \"kubernetes.io/projected/4a8b387f-3df2-4803-b785-99815cea430a-kube-api-access-d6q59\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.487134 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7382-account-create-kfv6s" event={"ID":"4a8b387f-3df2-4803-b785-99815cea430a","Type":"ContainerDied","Data":"75292678e0c833a0f2a07f48b3cd02eb4c91b09de34b7575b81fd5f2a622c54e"} Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.487187 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75292678e0c833a0f2a07f48b3cd02eb4c91b09de34b7575b81fd5f2a622c54e" Oct 02 11:46:30 crc kubenswrapper[4725]: I1002 11:46:30.487624 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7382-account-create-kfv6s" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.335335 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.424286 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kk9t6"] Oct 02 11:46:31 crc kubenswrapper[4725]: E1002 11:46:31.424690 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8b387f-3df2-4803-b785-99815cea430a" containerName="mariadb-account-create" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.424713 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8b387f-3df2-4803-b785-99815cea430a" containerName="mariadb-account-create" Oct 02 11:46:31 crc kubenswrapper[4725]: E1002 11:46:31.424781 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb878d4-98ff-4826-a952-54adfb9656e1" containerName="init" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.424792 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb878d4-98ff-4826-a952-54adfb9656e1" containerName="init" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.424995 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8b387f-3df2-4803-b785-99815cea430a" containerName="mariadb-account-create" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.425020 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb878d4-98ff-4826-a952-54adfb9656e1" containerName="init" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.425685 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.428654 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.434454 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kk9t6"] Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.437610 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4rskq" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.537449 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kt9\" (UniqueName: \"kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.537542 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.537673 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.639901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kt9\" (UniqueName: \"kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.640019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.640063 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.648573 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.652620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.656611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kt9\" (UniqueName: \"kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9\") pod \"barbican-db-sync-kk9t6\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:31 crc kubenswrapper[4725]: I1002 11:46:31.744239 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:32 crc kubenswrapper[4725]: E1002 11:46:32.116238 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Oct 02 11:46:32 crc kubenswrapper[4725]: E1002 11:46:32.116657 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2qwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-q2mkt_openstack(050130d8-978e-40e2-9869-ffdbcf50da81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:46:32 crc kubenswrapper[4725]: E1002 11:46:32.118696 4725 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-q2mkt" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.191692 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.254059 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.293552 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.293605 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.324478 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353567 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353700 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353775 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28mbk\" (UniqueName: \"kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk\") pod \"d384f0dd-dd01-4a5c-89d2-7c5981ad434d\" (UID: \"d384f0dd-dd01-4a5c-89d2-7c5981ad434d\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353825 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.353892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qpsz\" (UniqueName: \"kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.354009 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data\") pod \"e7ef6588-03db-4feb-b20d-78a45ace6749\" (UID: \"e7ef6588-03db-4feb-b20d-78a45ace6749\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.360848 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.369500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts" (OuterVolumeSpecName: "scripts") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.387935 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.388061 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz" (OuterVolumeSpecName: "kube-api-access-4qpsz") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "kube-api-access-4qpsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.392244 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk" (OuterVolumeSpecName: "kube-api-access-28mbk") pod "d384f0dd-dd01-4a5c-89d2-7c5981ad434d" (UID: "d384f0dd-dd01-4a5c-89d2-7c5981ad434d"). InnerVolumeSpecName "kube-api-access-28mbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.412548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data" (OuterVolumeSpecName: "config-data") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.422647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7ef6588-03db-4feb-b20d-78a45ace6749" (UID: "e7ef6588-03db-4feb-b20d-78a45ace6749"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data\") pod \"eca38056-f65f-4d74-b0d2-258e660bba10\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457369 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key\") pod \"eca38056-f65f-4d74-b0d2-258e660bba10\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457421 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4f8w\" (UniqueName: \"kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w\") pod \"be6de048-7002-4f87-9e87-14af9e141c8b\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457455 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gzg\" (UniqueName: \"kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg\") pod \"eca38056-f65f-4d74-b0d2-258e660bba10\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457499 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457525 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data\") pod \"be6de048-7002-4f87-9e87-14af9e141c8b\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457636 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457672 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts\") pod \"eca38056-f65f-4d74-b0d2-258e660bba10\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457748 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key\") pod \"be6de048-7002-4f87-9e87-14af9e141c8b\" (UID: 
\"be6de048-7002-4f87-9e87-14af9e141c8b\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457779 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs\") pod \"eca38056-f65f-4d74-b0d2-258e660bba10\" (UID: \"eca38056-f65f-4d74-b0d2-258e660bba10\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457813 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts\") pod \"be6de048-7002-4f87-9e87-14af9e141c8b\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.457862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458578 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458201 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts" (OuterVolumeSpecName: "scripts") pod "eca38056-f65f-4d74-b0d2-258e660bba10" (UID: "eca38056-f65f-4d74-b0d2-258e660bba10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458428 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data" (OuterVolumeSpecName: "config-data") pod "eca38056-f65f-4d74-b0d2-258e660bba10" (UID: "eca38056-f65f-4d74-b0d2-258e660bba10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458461 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458619 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts" (OuterVolumeSpecName: "scripts") pod "be6de048-7002-4f87-9e87-14af9e141c8b" (UID: "be6de048-7002-4f87-9e87-14af9e141c8b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458829 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs" (OuterVolumeSpecName: "logs") pod "eca38056-f65f-4d74-b0d2-258e660bba10" (UID: "eca38056-f65f-4d74-b0d2-258e660bba10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.458687 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\" (UID: \"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.459485 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data" (OuterVolumeSpecName: "config-data") pod "be6de048-7002-4f87-9e87-14af9e141c8b" (UID: "be6de048-7002-4f87-9e87-14af9e141c8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.459572 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs\") pod \"be6de048-7002-4f87-9e87-14af9e141c8b\" (UID: \"be6de048-7002-4f87-9e87-14af9e141c8b\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460709 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460767 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460779 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qpsz\" (UniqueName: \"kubernetes.io/projected/e7ef6588-03db-4feb-b20d-78a45ace6749-kube-api-access-4qpsz\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460789 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460799 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460809 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460817 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca38056-f65f-4d74-b0d2-258e660bba10-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460825 4725 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460834 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca38056-f65f-4d74-b0d2-258e660bba10-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460842 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be6de048-7002-4f87-9e87-14af9e141c8b-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460850 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460858 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7ef6588-03db-4feb-b20d-78a45ace6749-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.460867 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28mbk\" (UniqueName: \"kubernetes.io/projected/d384f0dd-dd01-4a5c-89d2-7c5981ad434d-kube-api-access-28mbk\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.461075 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs" (OuterVolumeSpecName: "logs") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.464272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs" (OuterVolumeSpecName: "logs") pod "be6de048-7002-4f87-9e87-14af9e141c8b" (UID: "be6de048-7002-4f87-9e87-14af9e141c8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.465711 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "eca38056-f65f-4d74-b0d2-258e660bba10" (UID: "eca38056-f65f-4d74-b0d2-258e660bba10"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.474436 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w" (OuterVolumeSpecName: "kube-api-access-n4f8w") pod "be6de048-7002-4f87-9e87-14af9e141c8b" (UID: "be6de048-7002-4f87-9e87-14af9e141c8b"). InnerVolumeSpecName "kube-api-access-n4f8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.476963 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "be6de048-7002-4f87-9e87-14af9e141c8b" (UID: "be6de048-7002-4f87-9e87-14af9e141c8b"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.477007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts" (OuterVolumeSpecName: "scripts") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.477004 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr" (OuterVolumeSpecName: "kube-api-access-pvqxr") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "kube-api-access-pvqxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.477042 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg" (OuterVolumeSpecName: "kube-api-access-69gzg") pod "eca38056-f65f-4d74-b0d2-258e660bba10" (UID: "eca38056-f65f-4d74-b0d2-258e660bba10"). InnerVolumeSpecName "kube-api-access-69gzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.477713 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.507606 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.515245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7dd-account-create-7lhl9" event={"ID":"d384f0dd-dd01-4a5c-89d2-7c5981ad434d","Type":"ContainerDied","Data":"16d3076985e43d953351db97f87b45a9197bd1183631292bea966fba8f0850a3"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.515302 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d3076985e43d953351db97f87b45a9197bd1183631292bea966fba8f0850a3" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.515303 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a7dd-account-create-7lhl9" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.532761 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa4-account-create-gbmhh" event={"ID":"480d8d51-eb78-4bce-8f31-0ff29e6ac822","Type":"ContainerStarted","Data":"12f031825f2811221fd0aaf7c85c72319c391f08732db59e3c5020d4ed16b7f9"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.537775 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647496bdc-w289x" event={"ID":"eca38056-f65f-4d74-b0d2-258e660bba10","Type":"ContainerDied","Data":"64fc8c882bc2eafd02bd4c54c9f2f468c1ded2da6a5746badd79aea644570e6a"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.537797 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8647496bdc-w289x" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.541141 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9rkq" event={"ID":"e7ef6588-03db-4feb-b20d-78a45ace6749","Type":"ContainerDied","Data":"96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.541186 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96dd9c4ec668ec41c16027c6f37231a060cc624814a274ba37d1e3132d2b3edc" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.541267 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9rkq" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.551532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc7bf9c-g27mj" event={"ID":"be6de048-7002-4f87-9e87-14af9e141c8b","Type":"ContainerDied","Data":"c222f21676b0543148e483b56a7ffe7216b85f77ec377ab66cf2d0b90d909c2e"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.551744 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc7bf9c-g27mj" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.553657 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.555788 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-daa4-account-create-gbmhh" podStartSLOduration=21.555777294 podStartE2EDuration="21.555777294s" podCreationTimestamp="2025-10-02 11:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:32.547906037 +0000 UTC m=+1112.455405500" watchObservedRunningTime="2025-10-02 11:46:32.555777294 +0000 UTC m=+1112.463276757" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.557778 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af83e08a-34ac-4ba5-9246-d6bb2a0cecbc","Type":"ContainerDied","Data":"ef7ebed5097a27d2f2d2ae5a908a4b377d942249a1e26f728931577eb038f2ba"} Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.557857 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.557879 4725 scope.go:117] "RemoveContainer" containerID="a49788d91b3917667323db768960d4051702c541aff145d2b41119aea500f1a5" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562065 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562288 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be6de048-7002-4f87-9e87-14af9e141c8b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562304 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-kube-api-access-pvqxr\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562317 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562328 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562362 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562376 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be6de048-7002-4f87-9e87-14af9e141c8b-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562388 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca38056-f65f-4d74-b0d2-258e660bba10-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562402 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4f8w\" (UniqueName: \"kubernetes.io/projected/be6de048-7002-4f87-9e87-14af9e141c8b-kube-api-access-n4f8w\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.562415 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gzg\" (UniqueName: \"kubernetes.io/projected/eca38056-f65f-4d74-b0d2-258e660bba10-kube-api-access-69gzg\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.572621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data" (OuterVolumeSpecName: "config-data") pod "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" (UID: "af83e08a-34ac-4ba5-9246-d6bb2a0cecbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.615243 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.656882 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663451 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrdt\" (UniqueName: \"kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663705 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663881 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663948 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.663970 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb\") pod \"c80ad4a4-1047-472a-b39f-96ccebce9c00\" (UID: \"c80ad4a4-1047-472a-b39f-96ccebce9c00\") " Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.664523 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.664548 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.672971 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt" (OuterVolumeSpecName: "kube-api-access-vnrdt") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "kube-api-access-vnrdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: E1002 11:46:32.673201 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-q2mkt" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.673493 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.685829 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8647496bdc-w289x"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.695197 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b797cdcc6-7cf2m"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.695984 4725 scope.go:117] "RemoveContainer" containerID="7e63f66f70d06cb45f66a7ab8160680fe814c10c2707bfd4ccaf9045de1513f7" Oct 02 11:46:32 crc kubenswrapper[4725]: W1002 11:46:32.717637 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcc08588_f9a7_4bb5_bd05_eb7e2e5738a4.slice/crio-59bb826973d6608d05797c5c7294ef297e577edffb19819a01cafe449f38cb68 WatchSource:0}: Error finding container 59bb826973d6608d05797c5c7294ef297e577edffb19819a01cafe449f38cb68: Status 404 returned error can't find the container with id 59bb826973d6608d05797c5c7294ef297e577edffb19819a01cafe449f38cb68 Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.736883 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.747348 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76cc7bf9c-g27mj"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.769670 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrdt\" (UniqueName: \"kubernetes.io/projected/c80ad4a4-1047-472a-b39f-96ccebce9c00-kube-api-access-vnrdt\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.793748 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kk9t6"] Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.798790 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config" (OuterVolumeSpecName: "config") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.808215 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.815252 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.818271 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.821317 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c80ad4a4-1047-472a-b39f-96ccebce9c00" (UID: "c80ad4a4-1047-472a-b39f-96ccebce9c00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.871250 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.871661 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.871676 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.871685 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:32 crc kubenswrapper[4725]: I1002 11:46:32.871694 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c80ad4a4-1047-472a-b39f-96ccebce9c00-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.005816 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.013973 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029300 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029665 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d384f0dd-dd01-4a5c-89d2-7c5981ad434d" containerName="mariadb-account-create" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029683 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d384f0dd-dd01-4a5c-89d2-7c5981ad434d" containerName="mariadb-account-create" Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029694 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029701 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029713 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="init" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029719 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="init" Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029755 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-httpd" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029760 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-httpd" Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029769 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-log" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029776 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-log" Oct 02 11:46:33 crc kubenswrapper[4725]: E1002 11:46:33.029793 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ef6588-03db-4feb-b20d-78a45ace6749" containerName="keystone-bootstrap" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.029799 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ef6588-03db-4feb-b20d-78a45ace6749" containerName="keystone-bootstrap" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.033947 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" containerName="dnsmasq-dns" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.033987 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-httpd" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.033994 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d384f0dd-dd01-4a5c-89d2-7c5981ad434d" containerName="mariadb-account-create" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.034005 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" containerName="glance-log" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.034013 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ef6588-03db-4feb-b20d-78a45ace6749" containerName="keystone-bootstrap" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.034991 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.036893 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.037404 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.053037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.175917 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.175974 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176003 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176111 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176251 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.176426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l8sck\" (UniqueName: \"kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278118 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sck\" (UniqueName: \"kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278270 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278470 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278499 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.278775 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.288594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.290087 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.294398 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af83e08a-34ac-4ba5-9246-d6bb2a0cecbc" path="/var/lib/kubelet/pods/af83e08a-34ac-4ba5-9246-d6bb2a0cecbc/volumes" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.295561 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6de048-7002-4f87-9e87-14af9e141c8b" path="/var/lib/kubelet/pods/be6de048-7002-4f87-9e87-14af9e141c8b/volumes" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.296239 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca38056-f65f-4d74-b0d2-258e660bba10" path="/var/lib/kubelet/pods/eca38056-f65f-4d74-b0d2-258e660bba10/volumes" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.301052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.305192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.308230 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.308401 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.313539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sck\" (UniqueName: \"kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck\") pod \"glance-default-external-api-0\" (UID: 
\"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.342216 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w9rkq"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.349969 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w9rkq"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.353593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.464439 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zmk95"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.467588 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.477663 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.477741 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.477857 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zwkvx" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.481387 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zmk95"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.485027 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.568249 4725 generic.go:334] "Generic (PLEG): container finished" podID="480d8d51-eb78-4bce-8f31-0ff29e6ac822" containerID="12f031825f2811221fd0aaf7c85c72319c391f08732db59e3c5020d4ed16b7f9" exitCode=0 Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.568763 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa4-account-create-gbmhh" event={"ID":"480d8d51-eb78-4bce-8f31-0ff29e6ac822","Type":"ContainerDied","Data":"12f031825f2811221fd0aaf7c85c72319c391f08732db59e3c5020d4ed16b7f9"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.570055 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerStarted","Data":"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.570077 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerStarted","Data":"59bb826973d6608d05797c5c7294ef297e577edffb19819a01cafe449f38cb68"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.571936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b797cdcc6-7cf2m" event={"ID":"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8","Type":"ContainerStarted","Data":"7a0c35eeec1c5800ddc4733d79f39d1b9a1a0531118ecc185f30e8ae8883b873"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.571957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-b797cdcc6-7cf2m" event={"ID":"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8","Type":"ContainerStarted","Data":"1337680e2939429456f4d5c5822f6356115d2108518da33e5ecfff5abc8963c0"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.572742 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kk9t6" event={"ID":"dff01db3-af7c-4b84-8258-482f19a0a330","Type":"ContainerStarted","Data":"6dac7d17d8be09301f71ef9b99a03ab63f628c071a38e11c94d008e2e350d2f7"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.574361 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" event={"ID":"c80ad4a4-1047-472a-b39f-96ccebce9c00","Type":"ContainerDied","Data":"d6757bce8b8f87ed6b3d1b44afe7c182fd985407fce65df5e84e8c4d975f2ea0"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.574396 4725 scope.go:117] "RemoveContainer" containerID="6789ec144c802a83a978ecc4f56811e6415e0646439ba8b5c86c89e571c8c2fc" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.574755 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qmc5z" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.575973 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerStarted","Data":"a54d08ff4322dbde0471d94a6766cee77c5d56c5dc42402c5afe77a28c36cec6"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.578054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerStarted","Data":"74e3157b02fa4dce33ec3c3a0a9a34c77e14cfe774d1ae4275c74df1b52a1d78"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.582570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerStarted","Data":"9ed9eece27a460860df25c18c63e21c60d5b912916fc3cf73c40179dd85f487f"} Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.582710 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-log" containerID="cri-o://a558edd5103a0fc5170ff5551f41cb94cbb3f99a24556655f28a980e032fd318" gracePeriod=30 Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.582790 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-httpd" containerID="cri-o://9ed9eece27a460860df25c18c63e21c60d5b912916fc3cf73c40179dd85f487f" gracePeriod=30 Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.586588 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.586658 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys\") pod \"keystone-bootstrap-zmk95\" (UID: 
\"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.586689 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.587412 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.587505 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.587570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4ls\" (UniqueName: \"kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.608592 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.617538 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qmc5z"] Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.637105 4725 scope.go:117] "RemoveContainer" containerID="313fd03142575c990b2e4810b0f139c1ed68b54a82742007d8dd7e07c39cc312" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.652864 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.701856 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.701931 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.701963 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4ls\" (UniqueName: \"kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.701995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.702022 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.702043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.708634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.709406 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.709437 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.712536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.713471 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.723509 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4ls\" (UniqueName: \"kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls\") pod \"keystone-bootstrap-zmk95\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:33 crc kubenswrapper[4725]: I1002 11:46:33.818767 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.209098 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.209079475 podStartE2EDuration="25.209079475s" podCreationTimestamp="2025-10-02 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:33.636963327 +0000 UTC m=+1113.544462790" watchObservedRunningTime="2025-10-02 11:46:34.209079475 +0000 UTC m=+1114.116578938" Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.212271 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:46:34 crc kubenswrapper[4725]: W1002 11:46:34.216893 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e543110_c94c_4470_9114_1d776fba2216.slice/crio-ff95e27fd796ef34d52eeabd8c396650357a29d0d2e15ef6a4656b4c0f86ad98 WatchSource:0}: Error finding container ff95e27fd796ef34d52eeabd8c396650357a29d0d2e15ef6a4656b4c0f86ad98: Status 404 returned error can't find the container with id ff95e27fd796ef34d52eeabd8c396650357a29d0d2e15ef6a4656b4c0f86ad98 Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.337326 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zmk95"] Oct 02 11:46:34 crc kubenswrapper[4725]: W1002 11:46:34.345148 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588ebf73_67b1_4662_b4cf_d51123a49937.slice/crio-93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c WatchSource:0}: Error finding container 93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c: Status 404 returned error can't find the container with id 93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.595147 4725 generic.go:334] "Generic (PLEG): container finished" podID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerID="9ed9eece27a460860df25c18c63e21c60d5b912916fc3cf73c40179dd85f487f" exitCode=0 Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.595445 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerID="a558edd5103a0fc5170ff5551f41cb94cbb3f99a24556655f28a980e032fd318" exitCode=143 Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.595495 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerDied","Data":"9ed9eece27a460860df25c18c63e21c60d5b912916fc3cf73c40179dd85f487f"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.595528 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerDied","Data":"a558edd5103a0fc5170ff5551f41cb94cbb3f99a24556655f28a980e032fd318"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.598809 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerStarted","Data":"1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.603655 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerStarted","Data":"f381abffb8ca468621947fdc830922a27cebd5b407ea42239b2e0faebdc4f930"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.603845 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7595c47df5-46cnx" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon-log" containerID="cri-o://a54d08ff4322dbde0471d94a6766cee77c5d56c5dc42402c5afe77a28c36cec6" gracePeriod=30 Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.604153 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7595c47df5-46cnx" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon" containerID="cri-o://f381abffb8ca468621947fdc830922a27cebd5b407ea42239b2e0faebdc4f930" gracePeriod=30 Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.614893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerStarted","Data":"ff95e27fd796ef34d52eeabd8c396650357a29d0d2e15ef6a4656b4c0f86ad98"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.622408 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74988cc86b-c7lcm" podStartSLOduration=14.622390004 podStartE2EDuration="14.622390004s" podCreationTimestamp="2025-10-02 11:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:34.620743331 +0000 UTC m=+1114.528242794" watchObservedRunningTime="2025-10-02 11:46:34.622390004 +0000 UTC m=+1114.529889477" Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.622592 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b797cdcc6-7cf2m" event={"ID":"9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8","Type":"ContainerStarted","Data":"f1d0df3aacf366cd6cbda723d389edf759b6c893c63bd04c40f6a0f6e1a0ea9b"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.625541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmk95" 
event={"ID":"588ebf73-67b1-4662-b4cf-d51123a49937","Type":"ContainerStarted","Data":"93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c"} Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.652083 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7595c47df5-46cnx" podStartSLOduration=11.567602401 podStartE2EDuration="21.652064177s" podCreationTimestamp="2025-10-02 11:46:13 +0000 UTC" firstStartedPulling="2025-10-02 11:46:22.622595108 +0000 UTC m=+1102.530094571" lastFinishedPulling="2025-10-02 11:46:32.707056874 +0000 UTC m=+1112.614556347" observedRunningTime="2025-10-02 11:46:34.640274936 +0000 UTC m=+1114.547774399" watchObservedRunningTime="2025-10-02 11:46:34.652064177 +0000 UTC m=+1114.559563640" Oct 02 11:46:34 crc kubenswrapper[4725]: I1002 11:46:34.677800 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b797cdcc6-7cf2m" podStartSLOduration=14.677778946 podStartE2EDuration="14.677778946s" podCreationTimestamp="2025-10-02 11:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:34.674406686 +0000 UTC m=+1114.581906159" watchObservedRunningTime="2025-10-02 11:46:34.677778946 +0000 UTC m=+1114.585278409" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.011901 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.124863 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll259\" (UniqueName: \"kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259\") pod \"480d8d51-eb78-4bce-8f31-0ff29e6ac822\" (UID: \"480d8d51-eb78-4bce-8f31-0ff29e6ac822\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.131309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259" (OuterVolumeSpecName: "kube-api-access-ll259") pod "480d8d51-eb78-4bce-8f31-0ff29e6ac822" (UID: "480d8d51-eb78-4bce-8f31-0ff29e6ac822"). InnerVolumeSpecName "kube-api-access-ll259". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.226800 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll259\" (UniqueName: \"kubernetes.io/projected/480d8d51-eb78-4bce-8f31-0ff29e6ac822-kube-api-access-ll259\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.236509 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.278054 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80ad4a4-1047-472a-b39f-96ccebce9c00" path="/var/lib/kubelet/pods/c80ad4a4-1047-472a-b39f-96ccebce9c00/volumes" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.278621 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ef6588-03db-4feb-b20d-78a45ace6749" path="/var/lib/kubelet/pods/e7ef6588-03db-4feb-b20d-78a45ace6749/volumes" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85zkf\" (UniqueName: \"kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430676 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430717 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430832 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430929 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430954 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.430990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs\") pod \"61ba599e-0338-4e5a-9009-a7a5a4f66747\" (UID: \"61ba599e-0338-4e5a-9009-a7a5a4f66747\") " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.431874 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs" (OuterVolumeSpecName: "logs") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.431893 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.435233 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts" (OuterVolumeSpecName: "scripts") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.435464 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.435528 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf" (OuterVolumeSpecName: "kube-api-access-85zkf") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "kube-api-access-85zkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.453373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.477586 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data" (OuterVolumeSpecName: "config-data") pod "61ba599e-0338-4e5a-9009-a7a5a4f66747" (UID: "61ba599e-0338-4e5a-9009-a7a5a4f66747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.537395 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.537450 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.537461 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.537470 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85zkf\" (UniqueName: \"kubernetes.io/projected/61ba599e-0338-4e5a-9009-a7a5a4f66747-kube-api-access-85zkf\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.537483 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.538572 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ba599e-0338-4e5a-9009-a7a5a4f66747-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.538593 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61ba599e-0338-4e5a-9009-a7a5a4f66747-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.555201 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.641363 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.700699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerStarted","Data":"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16"} Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.736064 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmk95" event={"ID":"588ebf73-67b1-4662-b4cf-d51123a49937","Type":"ContainerStarted","Data":"c9a356393159f5d7e114cb81d89851a23264f01e0461805e984aa6fa2426aef7"} Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.750995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61ba599e-0338-4e5a-9009-a7a5a4f66747","Type":"ContainerDied","Data":"82a9a4a046f7c9a8784afd7eb5242052bf122d6e19ac84caee246e48087a89b8"} Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.751048 4725 scope.go:117] "RemoveContainer" containerID="9ed9eece27a460860df25c18c63e21c60d5b912916fc3cf73c40179dd85f487f" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.751157 4725 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.762814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa4-account-create-gbmhh" event={"ID":"480d8d51-eb78-4bce-8f31-0ff29e6ac822","Type":"ContainerDied","Data":"bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6"} Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.763109 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3c79c55e57c683d93935eb4e549f22b630bafc044eaf0d56586e417642e9c6" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.763258 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daa4-account-create-gbmhh" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.770519 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zmk95" podStartSLOduration=2.770502992 podStartE2EDuration="2.770502992s" podCreationTimestamp="2025-10-02 11:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:35.769519026 +0000 UTC m=+1115.677018489" watchObservedRunningTime="2025-10-02 11:46:35.770502992 +0000 UTC m=+1115.678002455" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.787880 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.795200 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.805603 4725 scope.go:117] "RemoveContainer" containerID="a558edd5103a0fc5170ff5551f41cb94cbb3f99a24556655f28a980e032fd318" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.830986 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:35 crc kubenswrapper[4725]: E1002 11:46:35.831342 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480d8d51-eb78-4bce-8f31-0ff29e6ac822" containerName="mariadb-account-create" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831353 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="480d8d51-eb78-4bce-8f31-0ff29e6ac822" containerName="mariadb-account-create" Oct 02 11:46:35 crc kubenswrapper[4725]: E1002 11:46:35.831376 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-log" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831382 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-log" Oct 02 11:46:35 crc kubenswrapper[4725]: E1002 11:46:35.831396 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-httpd" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831403 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-httpd" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831555 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="480d8d51-eb78-4bce-8f31-0ff29e6ac822" containerName="mariadb-account-create" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831565 4725 
memory_manager.go:354] "RemoveStaleState removing state" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-log" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.831576 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" containerName="glance-httpd" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.844412 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.863759 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.864175 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.879819 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.975940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976014 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976154 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ws6j\" (UniqueName: \"kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976242 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:35 crc kubenswrapper[4725]: I1002 11:46:35.976291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077292 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077316 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ws6j\" (UniqueName: \"kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077343 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077391 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077432 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077454 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.077902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.078713 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.079068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.091400 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.091547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.091676 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.098170 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.104888 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ws6j\" (UniqueName: \"kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j\") pod \"glance-default-internal-api-0\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.116150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.183742 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.340443 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-56t7s"] Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.342768 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.348278 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.348675 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.349694 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w7pj7" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.351227 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-56t7s"] Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.485622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.485926 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.485950 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.485981 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.486016 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlj7\" (UniqueName: \"kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.486068 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle\") pod \"cinder-db-sync-56t7s\" (UID: 
\"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.589912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590119 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590139 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590217 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwlj7\" (UniqueName: \"kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.590860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.596628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.602191 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.603008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.605576 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.606235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwlj7\" (UniqueName: \"kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7\") pod \"cinder-db-sync-56t7s\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.669242 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-56t7s" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.832452 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7jgcd"] Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.833818 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.841615 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.841638 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gwhvn" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.841857 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.842309 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7jgcd"] Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.853517 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerStarted","Data":"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48"} Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.875162 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:46:36 crc kubenswrapper[4725]: I1002 11:46:36.886472 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.886436381 podStartE2EDuration="3.886436381s" podCreationTimestamp="2025-10-02 11:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:36.882744665 +0000 UTC m=+1116.790244138" watchObservedRunningTime="2025-10-02 11:46:36.886436381 +0000 UTC m=+1116.793935854" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.008585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " 
pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.008755 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5t2\" (UniqueName: \"kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.009055 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.110762 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.110819 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.110880 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv5t2\" (UniqueName: \"kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.123708 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.134522 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv5t2\" (UniqueName: \"kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.136007 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config\") pod \"neutron-db-sync-7jgcd\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.207177 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.301536 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ba599e-0338-4e5a-9009-a7a5a4f66747" path="/var/lib/kubelet/pods/61ba599e-0338-4e5a-9009-a7a5a4f66747/volumes" Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.347000 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-56t7s"] Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.724454 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7jgcd"] Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.863924 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerStarted","Data":"fd2c86fead6ab5e781e1ca8d31d447730bc68251bb930297796143f89012f556"} Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.863973 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerStarted","Data":"2aba8e5cfa7bbd6a67cc13596bba6790a72a2fa36e02ab3f3fb2911c238355e6"} Oct 02 11:46:37 crc kubenswrapper[4725]: I1002 11:46:37.866575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-56t7s" event={"ID":"804fa613-f386-41a1-975e-835525211cb3","Type":"ContainerStarted","Data":"09387a217209b320b2476fab6e3558416e6541456518f9683ed8e01ab4d57cd5"} Oct 02 11:46:39 crc kubenswrapper[4725]: I1002 11:46:39.891934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7jgcd" event={"ID":"e3e740ae-3b77-4497-b730-6cbd4f960d84","Type":"ContainerStarted","Data":"0f8ecdeeab2fcc5edd7387821a529e45e19e8c55072730c518e342eea03ef91f"} Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.902430 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.902915 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.903020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7jgcd" event={"ID":"e3e740ae-3b77-4497-b730-6cbd4f960d84","Type":"ContainerStarted","Data":"ecca87986d1d6c49a7d5f56ca6d9c88ab4042ea69ae8bf49fdaeb8ec9a240921"} Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.907525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerStarted","Data":"802a6aecc5520acdf3ce8d589f213307ff1e645eec2dd48c8ec0203065f07f77"} Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.909669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kk9t6" event={"ID":"dff01db3-af7c-4b84-8258-482f19a0a330","Type":"ContainerStarted","Data":"76705b9cfcbbaf736d157836bbe6b5a0e09f173a6acccc2bbc939d729ea5c812"} Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.922768 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7jgcd" podStartSLOduration=4.922746346 podStartE2EDuration="4.922746346s" podCreationTimestamp="2025-10-02 11:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-02 11:46:40.913762269 +0000 UTC m=+1120.821261732" watchObservedRunningTime="2025-10-02 11:46:40.922746346 +0000 UTC m=+1120.830245809" Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.923010 4725 generic.go:334] "Generic (PLEG): container finished" podID="588ebf73-67b1-4662-b4cf-d51123a49937" containerID="c9a356393159f5d7e114cb81d89851a23264f01e0461805e984aa6fa2426aef7" exitCode=0 Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.923055 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmk95" event={"ID":"588ebf73-67b1-4662-b4cf-d51123a49937","Type":"ContainerDied","Data":"c9a356393159f5d7e114cb81d89851a23264f01e0461805e984aa6fa2426aef7"} Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.960520 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kk9t6" podStartSLOduration=2.284131563 podStartE2EDuration="9.960494451s" podCreationTimestamp="2025-10-02 11:46:31 +0000 UTC" firstStartedPulling="2025-10-02 11:46:32.808347086 +0000 UTC m=+1112.715846549" lastFinishedPulling="2025-10-02 11:46:40.484709974 +0000 UTC m=+1120.392209437" observedRunningTime="2025-10-02 11:46:40.936340125 +0000 UTC m=+1120.843839588" watchObservedRunningTime="2025-10-02 11:46:40.960494451 +0000 UTC m=+1120.867993914" Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.983167 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:40 crc kubenswrapper[4725]: I1002 11:46:40.984063 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:46:41 crc kubenswrapper[4725]: I1002 11:46:41.954865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerStarted","Data":"c851c67c39f6eeb589ce1d23a005391a2f1d7445dc839747c5aab409cd5d61c2"} Oct 02 11:46:41 crc kubenswrapper[4725]: I1002 11:46:41.990220 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.990197316 podStartE2EDuration="6.990197316s" podCreationTimestamp="2025-10-02 11:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:41.979102273 +0000 UTC m=+1121.886601726" watchObservedRunningTime="2025-10-02 11:46:41.990197316 +0000 UTC m=+1121.897696789" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.342055 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.420861 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.420922 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.420962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.421041 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz4ls\" (UniqueName: \"kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.421062 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.421113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle\") pod \"588ebf73-67b1-4662-b4cf-d51123a49937\" (UID: \"588ebf73-67b1-4662-b4cf-d51123a49937\") " Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.428373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls" (OuterVolumeSpecName: "kube-api-access-xz4ls") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "kube-api-access-xz4ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.429744 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.443056 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts" (OuterVolumeSpecName: "scripts") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.445888 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.451232 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data" (OuterVolumeSpecName: "config-data") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.458217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "588ebf73-67b1-4662-b4cf-d51123a49937" (UID: "588ebf73-67b1-4662-b4cf-d51123a49937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523638 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523669 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz4ls\" (UniqueName: \"kubernetes.io/projected/588ebf73-67b1-4662-b4cf-d51123a49937-kube-api-access-xz4ls\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523679 4725 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523689 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523697 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.523705 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588ebf73-67b1-4662-b4cf-d51123a49937-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.965337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmk95" event={"ID":"588ebf73-67b1-4662-b4cf-d51123a49937","Type":"ContainerDied","Data":"93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c"} Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.965395 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b28e7fd91a474db30f7e4af0426ebf5dc076c05a11b206b6529729197cf05c" Oct 02 11:46:42 crc kubenswrapper[4725]: I1002 11:46:42.965358 4725 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zmk95" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.041919 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f5bf68656-dnz2c"] Oct 02 11:46:43 crc kubenswrapper[4725]: E1002 11:46:43.042397 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588ebf73-67b1-4662-b4cf-d51123a49937" containerName="keystone-bootstrap" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.042417 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="588ebf73-67b1-4662-b4cf-d51123a49937" containerName="keystone-bootstrap" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.042601 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="588ebf73-67b1-4662-b4cf-d51123a49937" containerName="keystone-bootstrap" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.043887 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.048153 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.048428 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zwkvx" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.048600 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.048735 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.049079 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.049219 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.072715 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5bf68656-dnz2c"] Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-config-data\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-scripts\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135304 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-fernet-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-credential-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-combined-ca-bundle\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135394 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-internal-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvghs\" (UniqueName: \"kubernetes.io/projected/7458e87c-8d2c-4e87-9577-c718b49f9e85-kube-api-access-dvghs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.135646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-public-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237316 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-config-data\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237410 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-scripts\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-fernet-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-credential-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237494 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-combined-ca-bundle\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-internal-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237607 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvghs\" (UniqueName: \"kubernetes.io/projected/7458e87c-8d2c-4e87-9577-c718b49f9e85-kube-api-access-dvghs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.237642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-public-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.242993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-credential-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.243077 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-combined-ca-bundle\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.243433 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-scripts\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.244152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-config-data\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.244651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-public-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.249207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-internal-tls-certs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " 
pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.256479 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7458e87c-8d2c-4e87-9577-c718b49f9e85-fernet-keys\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.259614 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvghs\" (UniqueName: \"kubernetes.io/projected/7458e87c-8d2c-4e87-9577-c718b49f9e85-kube-api-access-dvghs\") pod \"keystone-f5bf68656-dnz2c\" (UID: \"7458e87c-8d2c-4e87-9577-c718b49f9e85\") " pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.373948 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.653990 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.654298 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.700017 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.715295 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.725766 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.852416 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5bf68656-dnz2c"] Oct 02 11:46:43 crc kubenswrapper[4725]: W1002 11:46:43.874857 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7458e87c_8d2c_4e87_9577_c718b49f9e85.slice/crio-7fda74e84e2da4028820d6bfc0339d17ba75dac97b88dbef7e0c2b2f1f953e3b WatchSource:0}: Error finding container 7fda74e84e2da4028820d6bfc0339d17ba75dac97b88dbef7e0c2b2f1f953e3b: Status 404 returned error can't find the container with id 7fda74e84e2da4028820d6bfc0339d17ba75dac97b88dbef7e0c2b2f1f953e3b Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.976820 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5bf68656-dnz2c" event={"ID":"7458e87c-8d2c-4e87-9577-c718b49f9e85","Type":"ContainerStarted","Data":"7fda74e84e2da4028820d6bfc0339d17ba75dac97b88dbef7e0c2b2f1f953e3b"} Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.976871 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 11:46:43 crc kubenswrapper[4725]: I1002 11:46:43.976973 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 11:46:44 crc kubenswrapper[4725]: I1002 11:46:44.988297 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5bf68656-dnz2c" 
event={"ID":"7458e87c-8d2c-4e87-9577-c718b49f9e85","Type":"ContainerStarted","Data":"4c1895c540c090776acf476349bc0fbef45b02dd0459c88cdbcebbee52331038"} Oct 02 11:46:45 crc kubenswrapper[4725]: I1002 11:46:45.015372 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f5bf68656-dnz2c" podStartSLOduration=2.015350765 podStartE2EDuration="2.015350765s" podCreationTimestamp="2025-10-02 11:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:46:45.009755027 +0000 UTC m=+1124.917254500" watchObservedRunningTime="2025-10-02 11:46:45.015350765 +0000 UTC m=+1124.922850228" Oct 02 11:46:45 crc kubenswrapper[4725]: I1002 11:46:45.998555 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:46:45 crc kubenswrapper[4725]: I1002 11:46:45.998653 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:46:45 crc kubenswrapper[4725]: I1002 11:46:45.998665 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.172110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.185027 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.185084 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.218748 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.228326 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:46 crc kubenswrapper[4725]: I1002 11:46:46.234059 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:47 crc kubenswrapper[4725]: I1002 11:46:47.008150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:47 crc kubenswrapper[4725]: I1002 11:46:47.008994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:48 crc kubenswrapper[4725]: I1002 11:46:48.016168 4725 generic.go:334] "Generic (PLEG): container finished" podID="dff01db3-af7c-4b84-8258-482f19a0a330" containerID="76705b9cfcbbaf736d157836bbe6b5a0e09f173a6acccc2bbc939d729ea5c812" exitCode=0 Oct 02 11:46:48 crc kubenswrapper[4725]: I1002 11:46:48.016964 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kk9t6" event={"ID":"dff01db3-af7c-4b84-8258-482f19a0a330","Type":"ContainerDied","Data":"76705b9cfcbbaf736d157836bbe6b5a0e09f173a6acccc2bbc939d729ea5c812"} Oct 02 11:46:49 crc kubenswrapper[4725]: I1002 11:46:49.025783 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:46:49 crc kubenswrapper[4725]: I1002 11:46:49.028079 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:46:49 crc kubenswrapper[4725]: I1002 11:46:49.187896 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:49 crc kubenswrapper[4725]: I1002 11:46:49.188520 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 11:46:50 crc kubenswrapper[4725]: I1002 11:46:50.905040 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Oct 02 11:46:50 crc kubenswrapper[4725]: I1002 11:46:50.990651 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b797cdcc6-7cf2m" podUID="9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.109547 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kk9t6" event={"ID":"dff01db3-af7c-4b84-8258-482f19a0a330","Type":"ContainerDied","Data":"6dac7d17d8be09301f71ef9b99a03ab63f628c071a38e11c94d008e2e350d2f7"} Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.110079 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dac7d17d8be09301f71ef9b99a03ab63f628c071a38e11c94d008e2e350d2f7" Oct 02 11:46:58 crc kubenswrapper[4725]: E1002 11:46:58.159151 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 02 11:46:58 crc kubenswrapper[4725]: E1002 11:46:58.159340 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwlj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-56t7s_openstack(804fa613-f386-41a1-975e-835525211cb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:46:58 crc kubenswrapper[4725]: E1002 11:46:58.160537 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-56t7s" podUID="804fa613-f386-41a1-975e-835525211cb3" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.219557 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.340504 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data\") pod \"dff01db3-af7c-4b84-8258-482f19a0a330\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.340561 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75kt9\" (UniqueName: \"kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9\") pod \"dff01db3-af7c-4b84-8258-482f19a0a330\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.340713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle\") pod \"dff01db3-af7c-4b84-8258-482f19a0a330\" (UID: \"dff01db3-af7c-4b84-8258-482f19a0a330\") " Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.343983 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dff01db3-af7c-4b84-8258-482f19a0a330" (UID: "dff01db3-af7c-4b84-8258-482f19a0a330"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.345208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9" (OuterVolumeSpecName: "kube-api-access-75kt9") pod "dff01db3-af7c-4b84-8258-482f19a0a330" (UID: "dff01db3-af7c-4b84-8258-482f19a0a330"). InnerVolumeSpecName "kube-api-access-75kt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.380565 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dff01db3-af7c-4b84-8258-482f19a0a330" (UID: "dff01db3-af7c-4b84-8258-482f19a0a330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.443860 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.443887 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dff01db3-af7c-4b84-8258-482f19a0a330-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:58 crc kubenswrapper[4725]: I1002 11:46:58.443900 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75kt9\" (UniqueName: \"kubernetes.io/projected/dff01db3-af7c-4b84-8258-482f19a0a330-kube-api-access-75kt9\") on node \"crc\" DevicePath \"\"" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.118932 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerStarted","Data":"fe45a01c61ab60b5a87e9ebf45e5b882fd52f916bab0c897cf2b0f31428e9619"} Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.120252 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2mkt" event={"ID":"050130d8-978e-40e2-9869-ffdbcf50da81","Type":"ContainerStarted","Data":"938638e1841eb22d9d67b42574477b2499b5bafacd9b5e48ffd829af54925e56"} Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.120279 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kk9t6" Oct 02 11:46:59 crc kubenswrapper[4725]: E1002 11:46:59.124186 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-56t7s" podUID="804fa613-f386-41a1-975e-835525211cb3" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.165752 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q2mkt" podStartSLOduration=2.992188148 podStartE2EDuration="50.165713254s" podCreationTimestamp="2025-10-02 11:46:09 +0000 UTC" firstStartedPulling="2025-10-02 11:46:11.26840819 +0000 UTC m=+1091.175907653" lastFinishedPulling="2025-10-02 11:46:58.441933286 +0000 UTC m=+1138.349432759" observedRunningTime="2025-10-02 11:46:59.145803899 +0000 UTC m=+1139.053303402" watchObservedRunningTime="2025-10-02 11:46:59.165713254 +0000 UTC m=+1139.073212727" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.559491 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-796f86f598-mgjgh"] Oct 02 11:46:59 crc kubenswrapper[4725]: E1002 11:46:59.559978 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff01db3-af7c-4b84-8258-482f19a0a330" containerName="barbican-db-sync" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.559995 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff01db3-af7c-4b84-8258-482f19a0a330" containerName="barbican-db-sync" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.560160 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff01db3-af7c-4b84-8258-482f19a0a330" containerName="barbican-db-sync" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.561016 4725 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.565627 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.565714 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4rskq" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.567861 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.582368 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5798d58dff-jkj6h"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.583886 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.586743 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.595202 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796f86f598-mgjgh"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.617014 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5798d58dff-jkj6h"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.657899 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.659847 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.671970 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bmd5\" (UniqueName: \"kubernetes.io/projected/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-kube-api-access-2bmd5\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-combined-ca-bundle\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-logs\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672270 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-combined-ca-bundle\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672387 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqnv\" (UniqueName: \"kubernetes.io/projected/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-kube-api-access-8pqnv\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672507 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data-custom\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: 
I1002 11:46:59.672702 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data-custom\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.672757 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-logs\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.674037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.752461 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.754237 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.759412 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.764554 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn6p\" (UniqueName: \"kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774386 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774416 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqnv\" (UniqueName: \"kubernetes.io/projected/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-kube-api-access-8pqnv\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774447 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774466 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data\") pod 
\"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data-custom\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774509 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data-custom\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-logs\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bmd5\" (UniqueName: \"kubernetes.io/projected/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-kube-api-access-2bmd5\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-combined-ca-bundle\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774643 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774665 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-logs\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.774716 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-combined-ca-bundle\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.779021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-logs\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.779311 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-logs\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.780767 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data-custom\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.780830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.781005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-config-data-custom\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.781059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-combined-ca-bundle\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.782119 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-config-data\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.790827 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-combined-ca-bundle\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.805772 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bmd5\" (UniqueName: \"kubernetes.io/projected/7a8e9323-4d6b-4015-80bb-5d2752bfd94c-kube-api-access-2bmd5\") pod \"barbican-keystone-listener-796f86f598-mgjgh\" (UID: \"7a8e9323-4d6b-4015-80bb-5d2752bfd94c\") " pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.809547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqnv\" (UniqueName: \"kubernetes.io/projected/2ba9160b-539e-40a1-8d2f-4cb0f25e4084-kube-api-access-8pqnv\") pod \"barbican-worker-5798d58dff-jkj6h\" (UID: \"2ba9160b-539e-40a1-8d2f-4cb0f25e4084\") " pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878098 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878158 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878677 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878756 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878863 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878891 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878916 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.878978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.879018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn6p\" (UniqueName: \"kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.879035 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bqh\" (UniqueName: \"kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh\") pod 
\"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.881167 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.882091 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.882795 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.884690 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.884777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.895278 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmn6p\" (UniqueName: \"kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p\") pod \"dnsmasq-dns-59d5ff467f-tswl9\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.906046 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5798d58dff-jkj6h" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.980499 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.980813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.980859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bqh\" (UniqueName: \"kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.980902 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.980935 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.981444 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.982551 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.987479 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.990269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.998430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:46:59 crc kubenswrapper[4725]: I1002 11:46:59.998838 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bqh\" (UniqueName: \"kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh\") pod \"barbican-api-5f84679f8-gxltw\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:00 crc kubenswrapper[4725]: I1002 11:47:00.079578 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:00 crc kubenswrapper[4725]: I1002 11:47:00.376517 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796f86f598-mgjgh"] Oct 02 11:47:00 crc kubenswrapper[4725]: W1002 11:47:00.397133 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a8e9323_4d6b_4015_80bb_5d2752bfd94c.slice/crio-e21ab44f18487fb5fe18321f2884e2c950c4c88cd06c08b10b09fdfc22062db5 WatchSource:0}: Error finding container e21ab44f18487fb5fe18321f2884e2c950c4c88cd06c08b10b09fdfc22062db5: Status 404 returned error can't find the container with id e21ab44f18487fb5fe18321f2884e2c950c4c88cd06c08b10b09fdfc22062db5 Oct 02 11:47:00 crc kubenswrapper[4725]: I1002 11:47:00.434761 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5798d58dff-jkj6h"] Oct 02 11:47:00 crc kubenswrapper[4725]: W1002 11:47:00.442894 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ba9160b_539e_40a1_8d2f_4cb0f25e4084.slice/crio-d99a74acda7bb9924563a0db45207ca43e1e39b06bcd68d18dc40c27f0a34aaf WatchSource:0}: Error finding container d99a74acda7bb9924563a0db45207ca43e1e39b06bcd68d18dc40c27f0a34aaf: Status 404 returned error can't find the container with id d99a74acda7bb9924563a0db45207ca43e1e39b06bcd68d18dc40c27f0a34aaf Oct 02 11:47:00 crc kubenswrapper[4725]: I1002 11:47:00.598267 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:47:00 crc kubenswrapper[4725]: W1002 11:47:00.607545 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40540af_42c0_4c62_8bb4_3e8ba5f23f82.slice/crio-7119e9b4a4e4eb1a86559e39bd50738597ba59b42d9814a8890e10e0cffab92c WatchSource:0}: Error finding container 7119e9b4a4e4eb1a86559e39bd50738597ba59b42d9814a8890e10e0cffab92c: Status 404 returned error can't find the container with id 7119e9b4a4e4eb1a86559e39bd50738597ba59b42d9814a8890e10e0cffab92c Oct 02 11:47:00 crc kubenswrapper[4725]: I1002 11:47:00.655967 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:47:00 crc kubenswrapper[4725]: W1002 11:47:00.663785 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2fee18_a983_4641_a5f5_73813a5254cd.slice/crio-f6080eb28e46322a301a72aff325eb6294e6be5dd73e9f818cdde475d28cc580 WatchSource:0}: Error finding container f6080eb28e46322a301a72aff325eb6294e6be5dd73e9f818cdde475d28cc580: Status 404 returned error can't find the container with id f6080eb28e46322a301a72aff325eb6294e6be5dd73e9f818cdde475d28cc580 Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.146689 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerStarted","Data":"6f55acd9fa6e338d1ed56afb2b77f89d9e6736a615b7d2e499b6f4476a9c0efd"} Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.147019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerStarted","Data":"7119e9b4a4e4eb1a86559e39bd50738597ba59b42d9814a8890e10e0cffab92c"} Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.148803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5798d58dff-jkj6h" event={"ID":"2ba9160b-539e-40a1-8d2f-4cb0f25e4084","Type":"ContainerStarted","Data":"d99a74acda7bb9924563a0db45207ca43e1e39b06bcd68d18dc40c27f0a34aaf"} Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.150603 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" event={"ID":"7a8e9323-4d6b-4015-80bb-5d2752bfd94c","Type":"ContainerStarted","Data":"e21ab44f18487fb5fe18321f2884e2c950c4c88cd06c08b10b09fdfc22062db5"} Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.152570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerStarted","Data":"e9af8109036f1e29fd2c829dc79e8444b2a3f125cbd0d24ecfff240a73832651"} Oct 02 11:47:01 crc kubenswrapper[4725]: I1002 11:47:01.152609 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerStarted","Data":"f6080eb28e46322a301a72aff325eb6294e6be5dd73e9f818cdde475d28cc580"} Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.161938 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerStarted","Data":"e04383081b51b13ba6052f3572c50f1981dd41bbe2509cf90a16f735e37c8d9d"} Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.162234 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:02 crc 
kubenswrapper[4725]: I1002 11:47:02.162269 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.165520 4725 generic.go:334] "Generic (PLEG): container finished" podID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerID="6f55acd9fa6e338d1ed56afb2b77f89d9e6736a615b7d2e499b6f4476a9c0efd" exitCode=0 Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.165558 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerDied","Data":"6f55acd9fa6e338d1ed56afb2b77f89d9e6736a615b7d2e499b6f4476a9c0efd"} Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.186933 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f84679f8-gxltw" podStartSLOduration=3.186915869 podStartE2EDuration="3.186915869s" podCreationTimestamp="2025-10-02 11:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:02.18542219 +0000 UTC m=+1142.092921663" watchObservedRunningTime="2025-10-02 11:47:02.186915869 +0000 UTC m=+1142.094415332" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.364341 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d64c5b6c4-wjr9t"] Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.366604 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.369340 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.372707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.392712 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d64c5b6c4-wjr9t"] Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.548996 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data-custom\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wm4\" (UniqueName: \"kubernetes.io/projected/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-kube-api-access-t5wm4\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549201 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-public-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549248 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-logs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549348 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-internal-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.549386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-combined-ca-bundle\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-internal-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-combined-ca-bundle\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650705 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data-custom\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wm4\" (UniqueName: \"kubernetes.io/projected/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-kube-api-access-t5wm4\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650828 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-public-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.650873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-logs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.651309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-logs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.656388 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-public-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.656555 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data-custom\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.656687 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-config-data\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.657895 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-combined-ca-bundle\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.660998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-internal-tls-certs\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.669605 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wm4\" (UniqueName: \"kubernetes.io/projected/d2ef4726-c6b9-4bb3-909d-af176b24f2c8-kube-api-access-t5wm4\") pod \"barbican-api-6d64c5b6c4-wjr9t\" (UID: \"d2ef4726-c6b9-4bb3-909d-af176b24f2c8\") " pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:02 crc kubenswrapper[4725]: I1002 11:47:02.743171 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:03 crc kubenswrapper[4725]: I1002 11:47:03.095011 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:47:03 crc kubenswrapper[4725]: I1002 11:47:03.179061 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerStarted","Data":"ea1156d54dfb67bd2541feb6f359520e862fd901db5c7699e81bf4451054bf6d"} Oct 02 11:47:03 crc kubenswrapper[4725]: I1002 11:47:03.222284 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" podStartSLOduration=4.222258224 podStartE2EDuration="4.222258224s" podCreationTimestamp="2025-10-02 11:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:03.21342276 +0000 UTC m=+1143.120922243" watchObservedRunningTime="2025-10-02 11:47:03.222258224 +0000 UTC m=+1143.129757687" Oct 02 11:47:03 crc kubenswrapper[4725]: I1002 11:47:03.244402 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.188449 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.842770 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b797cdcc6-7cf2m" Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.902585 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"] Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.902871 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon-log" containerID="cri-o://8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968" gracePeriod=30 Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.903321 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" containerID="cri-o://1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1" gracePeriod=30 Oct 02 11:47:04 crc kubenswrapper[4725]: I1002 11:47:04.909356 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Oct 02 11:47:05 crc kubenswrapper[4725]: I1002 11:47:05.198762 4725 generic.go:334] "Generic (PLEG): container finished" podID="82823ab6-7a43-44da-8ad3-33edd043d777" containerID="f381abffb8ca468621947fdc830922a27cebd5b407ea42239b2e0faebdc4f930" exitCode=137 Oct 02 11:47:05 crc kubenswrapper[4725]: I1002 11:47:05.198800 4725 generic.go:334] "Generic (PLEG): container finished" podID="82823ab6-7a43-44da-8ad3-33edd043d777" containerID="a54d08ff4322dbde0471d94a6766cee77c5d56c5dc42402c5afe77a28c36cec6" exitCode=137 Oct 02 11:47:05 crc kubenswrapper[4725]: I1002 11:47:05.198848 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" 
event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerDied","Data":"f381abffb8ca468621947fdc830922a27cebd5b407ea42239b2e0faebdc4f930"} Oct 02 11:47:05 crc kubenswrapper[4725]: I1002 11:47:05.198904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerDied","Data":"a54d08ff4322dbde0471d94a6766cee77c5d56c5dc42402c5afe77a28c36cec6"} Oct 02 11:47:08 crc kubenswrapper[4725]: I1002 11:47:08.294957 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52200->10.217.0.146:8443: read: connection reset by peer" Oct 02 11:47:09 crc kubenswrapper[4725]: I1002 11:47:09.352957 4725 generic.go:334] "Generic (PLEG): container finished" podID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerID="1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1" exitCode=0 Oct 02 11:47:09 crc kubenswrapper[4725]: I1002 11:47:09.353005 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerDied","Data":"1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1"} Oct 02 11:47:09 crc kubenswrapper[4725]: I1002 11:47:09.985026 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:47:10 crc kubenswrapper[4725]: I1002 11:47:10.046017 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:47:10 crc kubenswrapper[4725]: I1002 11:47:10.046288 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" containerID="cri-o://6916d612f2ae5c2678c41664a2a0420b0467c7528c94a7f744a1c4edd24af1c3" gracePeriod=10 Oct 02 11:47:10 crc kubenswrapper[4725]: I1002 11:47:10.352906 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Oct 02 11:47:10 crc kubenswrapper[4725]: I1002 11:47:10.903057 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Oct 02 11:47:11 crc kubenswrapper[4725]: I1002 11:47:11.122026 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 11:47:11 crc kubenswrapper[4725]: I1002 11:47:11.122048 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 11:47:12 crc kubenswrapper[4725]: I1002 11:47:12.381455 4725 generic.go:334] "Generic (PLEG): container finished" podID="02914303-44fa-48fc-842e-d0876f44e300" containerID="6916d612f2ae5c2678c41664a2a0420b0467c7528c94a7f744a1c4edd24af1c3" exitCode=0 Oct 02 11:47:12 crc kubenswrapper[4725]: I1002 11:47:12.381543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" event={"ID":"02914303-44fa-48fc-842e-d0876f44e300","Type":"ContainerDied","Data":"6916d612f2ae5c2678c41664a2a0420b0467c7528c94a7f744a1c4edd24af1c3"} Oct 02 11:47:12 crc kubenswrapper[4725]: I1002 11:47:12.525159 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:47:12 crc kubenswrapper[4725]: I1002 11:47:12.531582 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 11:47:15 crc kubenswrapper[4725]: I1002 11:47:15.088266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:15 crc kubenswrapper[4725]: I1002 11:47:15.360441 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Oct 02 11:47:16 crc kubenswrapper[4725]: I1002 11:47:16.213827 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f5bf68656-dnz2c" Oct 02 11:47:18 crc kubenswrapper[4725]: I1002 11:47:18.171512 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.124946 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.126331 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.128323 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.128343 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.128521 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jvzkl" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.231320 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.231378 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config-secret\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.231483 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vshh\" (UniqueName: \"kubernetes.io/projected/a694a92f-563d-41d0-908e-744aec98dd01-kube-api-access-9vshh\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.231729 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.233359 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.333362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.333530 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.333563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config-secret\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.333590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9vshh\" (UniqueName: \"kubernetes.io/projected/a694a92f-563d-41d0-908e-744aec98dd01-kube-api-access-9vshh\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.334256 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.342207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-openstack-config-secret\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.346363 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a694a92f-563d-41d0-908e-744aec98dd01-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.351602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vshh\" (UniqueName: \"kubernetes.io/projected/a694a92f-563d-41d0-908e-744aec98dd01-kube-api-access-9vshh\") pod \"openstackclient\" (UID: \"a694a92f-563d-41d0-908e-744aec98dd01\") " pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.353407 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.353565 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.544509 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 02 11:47:20 crc kubenswrapper[4725]: I1002 11:47:20.903553 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Oct 02 11:47:25 crc kubenswrapper[4725]: I1002 11:47:25.352538 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Oct 02 11:47:30 crc kubenswrapper[4725]: I1002 11:47:30.352986 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Oct 02 11:47:30 crc kubenswrapper[4725]: I1002 11:47:30.903214 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74988cc86b-c7lcm" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Oct 02 11:47:31 crc kubenswrapper[4725]: E1002 11:47:31.292860 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified" Oct 02 11:47:31 crc kubenswrapper[4725]: E1002 11:47:31.293010 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-worker-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F /var/log/barbican/barbican-worker.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7dh59dh66fh694hbch549h558h5cfh67ch699hdfhf4h6bh7dh657h65h678h578h564h96hf6h5c4hfchffh555h579h68dh685h59bh599h558h68bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pqnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-worker-5798d58dff-jkj6h_openstack(2ba9160b-539e-40a1-8d2f-4cb0f25e4084): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 11:47:31 crc kubenswrapper[4725]: E1002 11:47:31.296025 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-5798d58dff-jkj6h" podUID="2ba9160b-539e-40a1-8d2f-4cb0f25e4084" Oct 02 11:47:31 crc kubenswrapper[4725]: E1002 11:47:31.660973 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-5798d58dff-jkj6h" podUID="2ba9160b-539e-40a1-8d2f-4cb0f25e4084" Oct 02 11:47:32 crc kubenswrapper[4725]: E1002 11:47:32.377922 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 02 11:47:32 crc kubenswrapper[4725]: E1002 11:47:32.378126 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xtfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(bc23aa02-f528-4674-adef-8c0793ac184d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 11:47:32 crc kubenswrapper[4725]: E1002 11:47:32.380737 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.493194 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.495377 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.576627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" event={"ID":"02914303-44fa-48fc-842e-d0876f44e300","Type":"ContainerDied","Data":"ce9cd3187747401f75ff3090f436c64d1eebf97a57b74760821bd18d252b83f0"} Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.576658 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rhqpv" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.576685 4725 scope.go:117] "RemoveContainer" containerID="6916d612f2ae5c2678c41664a2a0420b0467c7528c94a7f744a1c4edd24af1c3" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.586827 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595c47df5-46cnx" event={"ID":"82823ab6-7a43-44da-8ad3-33edd043d777","Type":"ContainerDied","Data":"2d504520f3333cc4cbb13347147056d5fa0ac157d12979150d513dc03a61ab59"} Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.586845 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7595c47df5-46cnx" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.586900 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-central-agent" containerID="cri-o://74e3157b02fa4dce33ec3c3a0a9a34c77e14cfe774d1ae4275c74df1b52a1d78" gracePeriod=30 Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.587012 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-notification-agent" containerID="cri-o://802a6aecc5520acdf3ce8d589f213307ff1e645eec2dd48c8ec0203065f07f77" gracePeriod=30 Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.587000 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="sg-core" containerID="cri-o://fe45a01c61ab60b5a87e9ebf45e5b882fd52f916bab0c897cf2b0f31428e9619" gracePeriod=30 Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606672 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs\") pod \"82823ab6-7a43-44da-8ad3-33edd043d777\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606741 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts\") pod \"82823ab6-7a43-44da-8ad3-33edd043d777\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606778 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606849 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key\") pod \"82823ab6-7a43-44da-8ad3-33edd043d777\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606885 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.606974 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 
11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.607009 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbj6q\" (UniqueName: \"kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.607051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4slt\" (UniqueName: \"kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt\") pod \"82823ab6-7a43-44da-8ad3-33edd043d777\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.607112 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0\") pod \"02914303-44fa-48fc-842e-d0876f44e300\" (UID: \"02914303-44fa-48fc-842e-d0876f44e300\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.607169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data\") pod \"82823ab6-7a43-44da-8ad3-33edd043d777\" (UID: \"82823ab6-7a43-44da-8ad3-33edd043d777\") " Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.611231 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs" (OuterVolumeSpecName: "logs") pod "82823ab6-7a43-44da-8ad3-33edd043d777" (UID: "82823ab6-7a43-44da-8ad3-33edd043d777"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.635555 4725 scope.go:117] "RemoveContainer" containerID="1e6fedd1f9f7de0914ad38ef6718da90f135adde5a4024ecab8e3d9c6dd5735b" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.643068 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt" (OuterVolumeSpecName: "kube-api-access-g4slt") pod "82823ab6-7a43-44da-8ad3-33edd043d777" (UID: "82823ab6-7a43-44da-8ad3-33edd043d777"). InnerVolumeSpecName "kube-api-access-g4slt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.660195 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "82823ab6-7a43-44da-8ad3-33edd043d777" (UID: "82823ab6-7a43-44da-8ad3-33edd043d777"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.663506 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q" (OuterVolumeSpecName: "kube-api-access-kbj6q") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "kube-api-access-kbj6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.670034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data" (OuterVolumeSpecName: "config-data") pod "82823ab6-7a43-44da-8ad3-33edd043d777" (UID: "82823ab6-7a43-44da-8ad3-33edd043d777"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.699484 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711441 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711483 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82823ab6-7a43-44da-8ad3-33edd043d777-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711495 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82823ab6-7a43-44da-8ad3-33edd043d777-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711507 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711518 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbj6q\" (UniqueName: \"kubernetes.io/projected/02914303-44fa-48fc-842e-d0876f44e300-kube-api-access-kbj6q\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.711530 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4slt\" (UniqueName: \"kubernetes.io/projected/82823ab6-7a43-44da-8ad3-33edd043d777-kube-api-access-g4slt\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.718175 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.721656 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config" (OuterVolumeSpecName: "config") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.729067 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.735459 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02914303-44fa-48fc-842e-d0876f44e300" (UID: "02914303-44fa-48fc-842e-d0876f44e300"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.755928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts" (OuterVolumeSpecName: "scripts") pod "82823ab6-7a43-44da-8ad3-33edd043d777" (UID: "82823ab6-7a43-44da-8ad3-33edd043d777"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.813045 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82823ab6-7a43-44da-8ad3-33edd043d777-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.813078 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.813088 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.813096 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.813109 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02914303-44fa-48fc-842e-d0876f44e300-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.832509 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d64c5b6c4-wjr9t"] Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.913016 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.922154 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rhqpv"] Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.932695 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.941527 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 02 11:47:32 crc kubenswrapper[4725]: I1002 11:47:32.949550 4725 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/horizon-7595c47df5-46cnx"] Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.053084 4725 scope.go:117] "RemoveContainer" containerID="f381abffb8ca468621947fdc830922a27cebd5b407ea42239b2e0faebdc4f930" Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.238251 4725 scope.go:117] "RemoveContainer" containerID="a54d08ff4322dbde0471d94a6766cee77c5d56c5dc42402c5afe77a28c36cec6" Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.280432 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02914303-44fa-48fc-842e-d0876f44e300" path="/var/lib/kubelet/pods/02914303-44fa-48fc-842e-d0876f44e300/volumes" Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.281422 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" path="/var/lib/kubelet/pods/82823ab6-7a43-44da-8ad3-33edd043d777/volumes" Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.597376 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" event={"ID":"7a8e9323-4d6b-4015-80bb-5d2752bfd94c","Type":"ContainerStarted","Data":"eaf6667050e43a0c0d360176605cdb6a3e9efd53bf443d1224b568b6d69fdc7f"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.597422 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" event={"ID":"7a8e9323-4d6b-4015-80bb-5d2752bfd94c","Type":"ContainerStarted","Data":"381bfd6fa015b77742021365b847ee1f8054f26a61902a01b53ac65b809b0903"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.599089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" event={"ID":"d2ef4726-c6b9-4bb3-909d-af176b24f2c8","Type":"ContainerStarted","Data":"129e8aefffb4036115f698c106038f068fad90052fa41504a5ba934c0e3d357f"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.599117 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" event={"ID":"d2ef4726-c6b9-4bb3-909d-af176b24f2c8","Type":"ContainerStarted","Data":"e3da56ae73f0247573da307c9c259a690e1976c645d0ae704e6938c7e841c30c"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.601348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a694a92f-563d-41d0-908e-744aec98dd01","Type":"ContainerStarted","Data":"978f052deb6a90e3304bbc3a634588a82840fc6074aa876b37a30e5b7973f183"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.604067 4725 generic.go:334] "Generic (PLEG): container finished" podID="bc23aa02-f528-4674-adef-8c0793ac184d" containerID="fe45a01c61ab60b5a87e9ebf45e5b882fd52f916bab0c897cf2b0f31428e9619" exitCode=2 Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.604089 4725 generic.go:334] "Generic (PLEG): container finished" podID="bc23aa02-f528-4674-adef-8c0793ac184d" containerID="74e3157b02fa4dce33ec3c3a0a9a34c77e14cfe774d1ae4275c74df1b52a1d78" exitCode=0 Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.604157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerDied","Data":"fe45a01c61ab60b5a87e9ebf45e5b882fd52f916bab0c897cf2b0f31428e9619"} Oct 02 11:47:33 crc kubenswrapper[4725]: I1002 11:47:33.604210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerDied","Data":"74e3157b02fa4dce33ec3c3a0a9a34c77e14cfe774d1ae4275c74df1b52a1d78"} Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.619290 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-56t7s" event={"ID":"804fa613-f386-41a1-975e-835525211cb3","Type":"ContainerStarted","Data":"b7e9b1ee574a8b19eb2a80b250dab80ed08ca9f07c9e584acf0e8fc05d53ed65"} Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.624399 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" event={"ID":"d2ef4726-c6b9-4bb3-909d-af176b24f2c8","Type":"ContainerStarted","Data":"b2c40c8f24c4ce3054258caa4b67f47afd6147b5a284fac84a45ce1be2842d06"} Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.624933 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.625047 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.639899 4725 generic.go:334] "Generic (PLEG): container finished" podID="bc23aa02-f528-4674-adef-8c0793ac184d" containerID="802a6aecc5520acdf3ce8d589f213307ff1e645eec2dd48c8ec0203065f07f77" exitCode=0 Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.639991 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerDied","Data":"802a6aecc5520acdf3ce8d589f213307ff1e645eec2dd48c8ec0203065f07f77"} Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.643954 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-56t7s" podStartSLOduration=2.8860690399999998 podStartE2EDuration="58.643937286s" podCreationTimestamp="2025-10-02 11:46:36 +0000 UTC" firstStartedPulling="2025-10-02 11:46:37.316366579 +0000 UTC m=+1117.223866042" lastFinishedPulling="2025-10-02 11:47:33.074234825 +0000 UTC m=+1172.981734288" observedRunningTime="2025-10-02 11:47:34.637002853 +0000 UTC m=+1174.544502336" watchObservedRunningTime="2025-10-02 11:47:34.643937286 +0000 UTC m=+1174.551436749" Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.657314 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" podStartSLOduration=32.657290019 podStartE2EDuration="32.657290019s" podCreationTimestamp="2025-10-02 11:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:34.653875978 +0000 UTC m=+1174.561375451" watchObservedRunningTime="2025-10-02 11:47:34.657290019 +0000 UTC m=+1174.564789482" Oct 02 11:47:34 crc kubenswrapper[4725]: I1002 11:47:34.676909 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-796f86f598-mgjgh" podStartSLOduration=3.760774215 podStartE2EDuration="35.676884436s" podCreationTimestamp="2025-10-02 11:46:59 +0000 UTC" firstStartedPulling="2025-10-02 11:47:00.398902995 +0000 UTC m=+1140.306402458" lastFinishedPulling="2025-10-02 11:47:32.315013216 +0000 UTC m=+1172.222512679" observedRunningTime="2025-10-02 11:47:34.671181396 +0000 UTC m=+1174.578680859" watchObservedRunningTime="2025-10-02 11:47:34.676884436 +0000 UTC m=+1174.584383899" Oct 02 11:47:35 crc 
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.310996 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76bb8577f-p858j"]
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.311658 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="init"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.311772 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="init"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.311851 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.311931 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.312008 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="sg-core"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312076 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="sg-core"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.312150 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-central-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312211 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-central-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.312279 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312352 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.312428 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon-log"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312500 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon-log"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.312574 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-notification-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312644 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-notification-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312961 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon-log"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.313070 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="02914303-44fa-48fc-842e-d0876f44e300" containerName="dnsmasq-dns"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.313171 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="82823ab6-7a43-44da-8ad3-33edd043d777" containerName="horizon"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.313243 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-central-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.313322 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="ceilometer-notification-agent"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.313403 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" containerName="sg-core"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.314633 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.314933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xtfv\" (UniqueName: \"kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.314993 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.315023 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.315055 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.315103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.315135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data\") pod \"bc23aa02-f528-4674-adef-8c0793ac184d\" (UID: \"bc23aa02-f528-4674-adef-8c0793ac184d\") "
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.312643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.315925 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.327334 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.328311 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.328439 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.329273 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76bb8577f-p858j"]
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.330864 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv" (OuterVolumeSpecName: "kube-api-access-7xtfv") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "kube-api-access-7xtfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.332416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.361936 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts" (OuterVolumeSpecName: "scripts") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.407146 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.423091 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.423134 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc23aa02-f528-4674-adef-8c0793ac184d-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.423147 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.423159 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xtfv\" (UniqueName: \"kubernetes.io/projected/bc23aa02-f528-4674-adef-8c0793ac184d-kube-api-access-7xtfv\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.435507 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data" (OuterVolumeSpecName: "config-data") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.478922 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc23aa02-f528-4674-adef-8c0793ac184d" (UID: "bc23aa02-f528-4674-adef-8c0793ac184d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.504140 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74988cc86b-c7lcm" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-log-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-etc-swift\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbsp\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-kube-api-access-fzbsp\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525399 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-internal-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-public-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525447 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-combined-ca-bundle\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-config-data\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525505 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-run-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525622 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.525638 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc23aa02-f528-4674-adef-8c0793ac184d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627184 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627259 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627290 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627415 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627546 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9k2m\" (UniqueName: \"kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs\") pod \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\" (UID: \"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4\") " Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-etc-swift\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627808 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbsp\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-kube-api-access-fzbsp\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc 
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-internal-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-public-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627863 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-combined-ca-bundle\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627889 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-config-data\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.627920 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-run-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.628074 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-log-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.628511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-log-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.629327 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs" (OuterVolumeSpecName: "logs") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.629344 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92f1433d-ba22-410b-b18f-b048e5ac47a7-run-httpd\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.633061 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.633422 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m" (OuterVolumeSpecName: "kube-api-access-h9k2m") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "kube-api-access-h9k2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.634842 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-combined-ca-bundle\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.634987 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-etc-swift\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.636055 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-internal-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.638118 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-config-data\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.640713 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f1433d-ba22-410b-b18f-b048e5ac47a7-public-tls-certs\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.657763 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbsp\" (UniqueName: \"kubernetes.io/projected/92f1433d-ba22-410b-b18f-b048e5ac47a7-kube-api-access-fzbsp\") pod \"swift-proxy-76bb8577f-p858j\" (UID: \"92f1433d-ba22-410b-b18f-b048e5ac47a7\") " pod="openstack/swift-proxy-76bb8577f-p858j"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.664535 4725 generic.go:334] "Generic (PLEG): container finished" podID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerID="8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968" exitCode=137
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.664615 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerDied","Data":"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968"}
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.664683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74988cc86b-c7lcm" event={"ID":"fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4","Type":"ContainerDied","Data":"59bb826973d6608d05797c5c7294ef297e577edffb19819a01cafe449f38cb68"}
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.664705 4725 scope.go:117] "RemoveContainer" containerID="1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.664932 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74988cc86b-c7lcm"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.667794 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data" (OuterVolumeSpecName: "config-data") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.673824 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc23aa02-f528-4674-adef-8c0793ac184d","Type":"ContainerDied","Data":"6c47dee1c7c2dbddf59800472a18a20edf3732c515932bd0b5a38fc69d9dcfb3"}
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.674095 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.686710 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts" (OuterVolumeSpecName: "scripts") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.690821 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.721689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" (UID: "fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736394 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9k2m\" (UniqueName: \"kubernetes.io/projected/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-kube-api-access-h9k2m\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736441 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736455 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736472 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736487 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736503 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.736519 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.766692 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.782156 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.793801 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.809323 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.809845 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon-log" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.809878 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon-log" Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.809920 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.809929 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.810137 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.810161 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" containerName="horizon-log" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.812314 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.814792 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.815059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.819425 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.906514 4725 scope.go:117] "RemoveContainer" containerID="8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941373 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941448 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941477 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941748 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprvn\" (UniqueName: \"kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.941963 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0" Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.942550 4725 scope.go:117] "RemoveContainer" containerID="1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1" Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.943380 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1"} err="failed to get container status \"1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1\": rpc error: code = NotFound desc = could not find container \"1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1\": container with ID starting with 1b130b61af0823ed64668e28ef69b09259ec737419a5f1545acdbabdc3b373c1 not found: ID does not exist"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.943413 4725 scope.go:117] "RemoveContainer" containerID="8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968"
Oct 02 11:47:35 crc kubenswrapper[4725]: E1002 11:47:35.944521 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968\": container with ID starting with 8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968 not found: ID does not exist" containerID="8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.944568 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968"} err="failed to get container status \"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968\": rpc error: code = NotFound desc = could not find container \"8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968\": container with ID starting with 8a372aa76915b778390dac1e5098bdd1f25122d9267a7354b8005e51a336e968 not found: ID does not exist"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.944597 4725 scope.go:117] "RemoveContainer" containerID="fe45a01c61ab60b5a87e9ebf45e5b882fd52f916bab0c897cf2b0f31428e9619"
Oct 02 11:47:35 crc kubenswrapper[4725]: I1002 11:47:35.978420 4725 scope.go:117] "RemoveContainer" containerID="802a6aecc5520acdf3ce8d589f213307ff1e645eec2dd48c8ec0203065f07f77"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.011901 4725 scope.go:117] "RemoveContainer" containerID="74e3157b02fa4dce33ec3c3a0a9a34c77e14cfe774d1ae4275c74df1b52a1d78"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.012102 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"]
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057691 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057710 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057755 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprvn\" (UniqueName: \"kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.057820 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.063947 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.064773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.066336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.067208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.067840 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.068820 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.078902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprvn\" (UniqueName: \"kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn\") pod \"ceilometer-0\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.081601 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74988cc86b-c7lcm"]
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.132508 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.707854 4725 generic.go:334] "Generic (PLEG): container finished" podID="050130d8-978e-40e2-9869-ffdbcf50da81" containerID="938638e1841eb22d9d67b42574477b2499b5bafacd9b5e48ffd829af54925e56" exitCode=0
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.707865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2mkt" event={"ID":"050130d8-978e-40e2-9869-ffdbcf50da81","Type":"ContainerDied","Data":"938638e1841eb22d9d67b42574477b2499b5bafacd9b5e48ffd829af54925e56"}
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.718232 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:36 crc kubenswrapper[4725]: I1002 11:47:36.934146 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76bb8577f-p858j"]
Oct 02 11:47:36 crc kubenswrapper[4725]: W1002 11:47:36.935664 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92f1433d_ba22_410b_b18f_b048e5ac47a7.slice/crio-c1f8064d7582a1f01cc2f6c71e9994ce35fd471783b6d4746b456082ea3d6f23 WatchSource:0}: Error finding container c1f8064d7582a1f01cc2f6c71e9994ce35fd471783b6d4746b456082ea3d6f23: Status 404 returned error can't find the container with id c1f8064d7582a1f01cc2f6c71e9994ce35fd471783b6d4746b456082ea3d6f23
Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.284211 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc23aa02-f528-4674-adef-8c0793ac184d" path="/var/lib/kubelet/pods/bc23aa02-f528-4674-adef-8c0793ac184d/volumes"
Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.285407 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4" path="/var/lib/kubelet/pods/fcc08588-f9a7-4bb5-bd05-eb7e2e5738a4/volumes"
Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.722539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76bb8577f-p858j" event={"ID":"92f1433d-ba22-410b-b18f-b048e5ac47a7","Type":"ContainerStarted","Data":"37fe2a463e3a6a640f419f8f9c0753526e25ede6cf6a0f6ab4d4c3755fe40183"}
Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.722579 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76bb8577f-p858j" event={"ID":"92f1433d-ba22-410b-b18f-b048e5ac47a7","Type":"ContainerStarted","Data":"f52da37de21c78ac50574440138a042b827564ae4e26551bfea533ca32be8ec3"}
event={"ID":"92f1433d-ba22-410b-b18f-b048e5ac47a7","Type":"ContainerStarted","Data":"f52da37de21c78ac50574440138a042b827564ae4e26551bfea533ca32be8ec3"} Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.722590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76bb8577f-p858j" event={"ID":"92f1433d-ba22-410b-b18f-b048e5ac47a7","Type":"ContainerStarted","Data":"c1f8064d7582a1f01cc2f6c71e9994ce35fd471783b6d4746b456082ea3d6f23"} Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.723853 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.723881 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.724848 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerStarted","Data":"ca73ed623057cb626ef913635b2081540dda6a8a2cf5766648dc2f3f5ab3a47e"} Oct 02 11:47:37 crc kubenswrapper[4725]: I1002 11:47:37.752123 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76bb8577f-p858j" podStartSLOduration=2.752099675 podStartE2EDuration="2.752099675s" podCreationTimestamp="2025-10-02 11:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:37.742996515 +0000 UTC m=+1177.650495978" watchObservedRunningTime="2025-10-02 11:47:37.752099675 +0000 UTC m=+1177.659599138" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.105486 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q2mkt" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.295824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data\") pod \"050130d8-978e-40e2-9869-ffdbcf50da81\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.295862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts\") pod \"050130d8-978e-40e2-9869-ffdbcf50da81\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.295887 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs\") pod \"050130d8-978e-40e2-9869-ffdbcf50da81\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.295916 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2qwg\" (UniqueName: \"kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg\") pod \"050130d8-978e-40e2-9869-ffdbcf50da81\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.295942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle\") pod \"050130d8-978e-40e2-9869-ffdbcf50da81\" (UID: \"050130d8-978e-40e2-9869-ffdbcf50da81\") " Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.296554 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs" (OuterVolumeSpecName: "logs") pod "050130d8-978e-40e2-9869-ffdbcf50da81" (UID: "050130d8-978e-40e2-9869-ffdbcf50da81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.303007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg" (OuterVolumeSpecName: "kube-api-access-x2qwg") pod "050130d8-978e-40e2-9869-ffdbcf50da81" (UID: "050130d8-978e-40e2-9869-ffdbcf50da81"). InnerVolumeSpecName "kube-api-access-x2qwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.305902 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts" (OuterVolumeSpecName: "scripts") pod "050130d8-978e-40e2-9869-ffdbcf50da81" (UID: "050130d8-978e-40e2-9869-ffdbcf50da81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.329461 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050130d8-978e-40e2-9869-ffdbcf50da81" (UID: "050130d8-978e-40e2-9869-ffdbcf50da81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.347132 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data" (OuterVolumeSpecName: "config-data") pod "050130d8-978e-40e2-9869-ffdbcf50da81" (UID: "050130d8-978e-40e2-9869-ffdbcf50da81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.397656 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.397678 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.397688 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050130d8-978e-40e2-9869-ffdbcf50da81-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.397697 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2qwg\" (UniqueName: \"kubernetes.io/projected/050130d8-978e-40e2-9869-ffdbcf50da81-kube-api-access-x2qwg\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.397705 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050130d8-978e-40e2-9869-ffdbcf50da81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.732629 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerStarted","Data":"d096557e0ded2d98cb81fa7facdcc255e5edd6990571f94323aea0ed6805da3c"} Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.734031 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q2mkt" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.734028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2mkt" event={"ID":"050130d8-978e-40e2-9869-ffdbcf50da81","Type":"ContainerDied","Data":"17de426e7cea762e39e88a3ca16301f7abfca11e27756a0d2071609dc2c6e76b"} Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.734081 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17de426e7cea762e39e88a3ca16301f7abfca11e27756a0d2071609dc2c6e76b" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.869374 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9b656dd8b-n4tcm"] Oct 02 11:47:38 crc kubenswrapper[4725]: E1002 11:47:38.870847 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" containerName="placement-db-sync" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.870867 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" containerName="placement-db-sync" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.871079 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" containerName="placement-db-sync" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.871962 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.878628 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.878847 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jm4cc" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.879094 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.879438 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.879576 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 02 11:47:38 crc kubenswrapper[4725]: I1002 11:47:38.891945 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b656dd8b-n4tcm"] Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016369 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-internal-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016430 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frfl4\" (UniqueName: \"kubernetes.io/projected/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-kube-api-access-frfl4\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016485 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-public-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016518 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-combined-ca-bundle\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016537 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-config-data\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016561 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-logs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.016585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-scripts\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118593 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-scripts\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-internal-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frfl4\" (UniqueName: \"kubernetes.io/projected/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-kube-api-access-frfl4\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118844 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-public-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-combined-ca-bundle\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118918 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-config-data\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.118955 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-logs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.119413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-logs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.125461 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-config-data\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.127312 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-public-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.129368 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-scripts\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.130903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-internal-tls-certs\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.137889 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-combined-ca-bundle\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.159515 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frfl4\" (UniqueName: \"kubernetes.io/projected/7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a-kube-api-access-frfl4\") pod \"placement-9b656dd8b-n4tcm\" (UID: \"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a\") " pod="openstack/placement-9b656dd8b-n4tcm" Oct 
02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.211596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.755588 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerStarted","Data":"0fdf1db59b5dae2c5fbff97335f9813df6221d145b5b7da0f7662dbd4e9a5739"} Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.756568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerStarted","Data":"b1e50dcbb8e87845339d84d02bb1b2236bf8e25ac4ab541aaf2cc3289e4536cd"} Oct 02 11:47:39 crc kubenswrapper[4725]: I1002 11:47:39.788957 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b656dd8b-n4tcm"] Oct 02 11:47:41 crc kubenswrapper[4725]: I1002 11:47:41.785917 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:41 crc kubenswrapper[4725]: I1002 11:47:41.786802 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-log" containerID="cri-o://fd2c86fead6ab5e781e1ca8d31d447730bc68251bb930297796143f89012f556" gracePeriod=30 Oct 02 11:47:41 crc kubenswrapper[4725]: I1002 11:47:41.787171 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-httpd" containerID="cri-o://c851c67c39f6eeb589ce1d23a005391a2f1d7445dc839747c5aab409cd5d61c2" gracePeriod=30 Oct 02 11:47:42 crc kubenswrapper[4725]: I1002 11:47:42.790435 4725 generic.go:334] "Generic (PLEG): container finished" podID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerID="fd2c86fead6ab5e781e1ca8d31d447730bc68251bb930297796143f89012f556" exitCode=143 Oct 02 11:47:42 crc kubenswrapper[4725]: I1002 11:47:42.790479 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerDied","Data":"fd2c86fead6ab5e781e1ca8d31d447730bc68251bb930297796143f89012f556"} Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.378851 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-b725j"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.380210 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b725j" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.404669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b725j"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.506382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tzz\" (UniqueName: \"kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz\") pod \"nova-api-db-create-b725j\" (UID: \"ca7b1966-197e-4e08-a162-7a3dd7eab8ed\") " pod="openstack/nova-api-db-create-b725j" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.571149 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tkqzt"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.572680 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.587626 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tkqzt"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.608969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tzz\" (UniqueName: \"kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz\") pod \"nova-api-db-create-b725j\" (UID: \"ca7b1966-197e-4e08-a162-7a3dd7eab8ed\") " pod="openstack/nova-api-db-create-b725j" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.628933 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9tzz\" (UniqueName: \"kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz\") pod \"nova-api-db-create-b725j\" (UID: \"ca7b1966-197e-4e08-a162-7a3dd7eab8ed\") " pod="openstack/nova-api-db-create-b725j" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.667872 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-tw4z2"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.669307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.680092 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tw4z2"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.711519 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q55p\" (UniqueName: \"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p\") pod \"nova-cell0-db-create-tkqzt\" (UID: \"9a932dc2-f106-4272-ad37-091b4b8ec1dc\") " pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.731776 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b725j" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.813691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv62h\" (UniqueName: \"kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h\") pod \"nova-cell1-db-create-tw4z2\" (UID: \"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b\") " pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.813886 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q55p\" (UniqueName: \"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p\") pod \"nova-cell0-db-create-tkqzt\" (UID: \"9a932dc2-f106-4272-ad37-091b4b8ec1dc\") " pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.836988 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q55p\" (UniqueName: \"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p\") pod \"nova-cell0-db-create-tkqzt\" (UID: \"9a932dc2-f106-4272-ad37-091b4b8ec1dc\") " pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.881292 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.896477 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.915647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv62h\" (UniqueName: \"kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h\") pod \"nova-cell1-db-create-tw4z2\" (UID: \"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b\") " pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:43 crc kubenswrapper[4725]: I1002 11:47:43.935231 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv62h\" (UniqueName: \"kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h\") pod \"nova-cell1-db-create-tw4z2\" (UID: \"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b\") " pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:44 crc kubenswrapper[4725]: I1002 11:47:44.005961 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:44 crc kubenswrapper[4725]: I1002 11:47:44.556076 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.044198 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d64c5b6c4-wjr9t" Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.067519 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.068008 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-log" containerID="cri-o://34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16" gracePeriod=30 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.068097 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-httpd" containerID="cri-o://cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48" gracePeriod=30 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.132438 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.132745 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" containerID="cri-o://e9af8109036f1e29fd2c829dc79e8444b2a3f125cbd0d24ecfff240a73832651" gracePeriod=30 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.132967 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f84679f8-gxltw" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" containerID="cri-o://e04383081b51b13ba6052f3572c50f1981dd41bbe2509cf90a16f735e37c8d9d" gracePeriod=30 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.775297 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.777266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76bb8577f-p858j" Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.830993 4725 generic.go:334] "Generic (PLEG): container finished" podID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerID="e9af8109036f1e29fd2c829dc79e8444b2a3f125cbd0d24ecfff240a73832651" exitCode=143 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.831086 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerDied","Data":"e9af8109036f1e29fd2c829dc79e8444b2a3f125cbd0d24ecfff240a73832651"} Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.836969 4725 generic.go:334] "Generic (PLEG): container finished" podID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerID="c851c67c39f6eeb589ce1d23a005391a2f1d7445dc839747c5aab409cd5d61c2" exitCode=0 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.837042 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerDied","Data":"c851c67c39f6eeb589ce1d23a005391a2f1d7445dc839747c5aab409cd5d61c2"} Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.851814 4725 generic.go:334] "Generic (PLEG): container finished" podID="0e543110-c94c-4470-9114-1d776fba2216" containerID="34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16" exitCode=143 Oct 02 11:47:45 crc kubenswrapper[4725]: I1002 11:47:45.852764 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerDied","Data":"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16"} Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.641635 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743203 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743291 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743358 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743421 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743464 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743500 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743540 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ws6j\" (UniqueName: \"kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.743613 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run\") pod \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\" (UID: \"59acc19b-4915-4b1c-b3cb-09ff04ccff58\") " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.744217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.745986 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs" (OuterVolumeSpecName: "logs") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.755878 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts" (OuterVolumeSpecName: "scripts") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.769217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j" (OuterVolumeSpecName: "kube-api-access-9ws6j") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "kube-api-access-9ws6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.769580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.845211 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ws6j\" (UniqueName: \"kubernetes.io/projected/59acc19b-4915-4b1c-b3cb-09ff04ccff58-kube-api-access-9ws6j\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.845443 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.845575 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.845650 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59acc19b-4915-4b1c-b3cb-09ff04ccff58-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.845738 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.851490 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tw4z2"] Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.864414 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tkqzt"] Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.875366 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.875532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59acc19b-4915-4b1c-b3cb-09ff04ccff58","Type":"ContainerDied","Data":"2aba8e5cfa7bbd6a67cc13596bba6790a72a2fa36e02ab3f3fb2911c238355e6"} Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.875593 4725 scope.go:117] "RemoveContainer" containerID="c851c67c39f6eeb589ce1d23a005391a2f1d7445dc839747c5aab409cd5d61c2" Oct 02 11:47:47 crc kubenswrapper[4725]: W1002 11:47:47.880323 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7eefa9c_3d9f_46d7_a47a_9886fd2b120b.slice/crio-8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1 WatchSource:0}: Error finding container 8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1: Status 404 returned error can't find the container with id 8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1 Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.884211 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b656dd8b-n4tcm" event={"ID":"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a","Type":"ContainerStarted","Data":"62b892e4c7bd1612e3ed2d8ee6887573a86cbded3c9b3003ad67fd28f604ee58"} Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.884264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b656dd8b-n4tcm" event={"ID":"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a","Type":"ContainerStarted","Data":"8af2b0d79537a898deb05d9d2d23bfe9a52a9640bbf7afa4e4ebf07dc8d95dba"} Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.891226 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a694a92f-563d-41d0-908e-744aec98dd01","Type":"ContainerStarted","Data":"22ccaea29b5cddd09e9cc0084efd71c5f2168e39a22e619f1ae19665de2d1b9f"} Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.916220 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=13.696427678 podStartE2EDuration="27.916203303s" podCreationTimestamp="2025-10-02 11:47:20 +0000 UTC" firstStartedPulling="2025-10-02 11:47:33.05359477 +0000 UTC m=+1172.961094233" lastFinishedPulling="2025-10-02 11:47:47.273370405 +0000 UTC m=+1187.180869858" observedRunningTime="2025-10-02 11:47:47.911324974 +0000 UTC m=+1187.818824437" watchObservedRunningTime="2025-10-02 11:47:47.916203303 +0000 UTC m=+1187.823702776" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.924268 4725 scope.go:117] "RemoveContainer" containerID="fd2c86fead6ab5e781e1ca8d31d447730bc68251bb930297796143f89012f556" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.982021 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:47 crc kubenswrapper[4725]: I1002 11:47:47.987031 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.009275 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b725j"] Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.039378 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.050942 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.051430 4725 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.051456 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.057370 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data" (OuterVolumeSpecName: "config-data") pod "59acc19b-4915-4b1c-b3cb-09ff04ccff58" (UID: "59acc19b-4915-4b1c-b3cb-09ff04ccff58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.153218 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59acc19b-4915-4b1c-b3cb-09ff04ccff58-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:48 crc kubenswrapper[4725]: E1002 11:47:48.604009 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a932dc2_f106_4272_ad37_091b4b8ec1dc.slice/crio-9574abc912ac543f89ecab186ce697a96f728d5c8d2c4d788b691fc7f8b48058.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a932dc2_f106_4272_ad37_091b4b8ec1dc.slice/crio-conmon-9574abc912ac543f89ecab186ce697a96f728d5c8d2c4d788b691fc7f8b48058.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7eefa9c_3d9f_46d7_a47a_9886fd2b120b.slice/crio-conmon-39417191c64b57ee831b5a5df0fc5eac8e7f10e43d7060638987b01c3a7aad40.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.765109 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.774602 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.777124 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.823912 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:48 crc kubenswrapper[4725]: E1002 11:47:48.824737 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.824757 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: E1002 11:47:48.824778 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.824785 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: E1002 11:47:48.824826 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.824835 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: E1002 11:47:48.824878 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.824886 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.825298 4725 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.825320 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.825348 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e543110-c94c-4470-9114-1d776fba2216" containerName="glance-log" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.825375 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" containerName="glance-httpd" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.835163 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.839617 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.840600 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.894218 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.915580 4725 generic.go:334] "Generic (PLEG): container finished" podID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerID="e04383081b51b13ba6052f3572c50f1981dd41bbe2509cf90a16f735e37c8d9d" exitCode=0 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.915670 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerDied","Data":"e04383081b51b13ba6052f3572c50f1981dd41bbe2509cf90a16f735e37c8d9d"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.917574 4725 generic.go:334] "Generic (PLEG): container finished" podID="c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" containerID="39417191c64b57ee831b5a5df0fc5eac8e7f10e43d7060638987b01c3a7aad40" exitCode=0 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.917714 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tw4z2" event={"ID":"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b","Type":"ContainerDied","Data":"39417191c64b57ee831b5a5df0fc5eac8e7f10e43d7060638987b01c3a7aad40"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.917799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tw4z2" event={"ID":"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b","Type":"ContainerStarted","Data":"8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.919606 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b725j" event={"ID":"ca7b1966-197e-4e08-a162-7a3dd7eab8ed","Type":"ContainerStarted","Data":"a3346e71f4700662902bdc8bf01640483cfcd2d2057eb988366f95c656c5e8fb"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.919689 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b725j" event={"ID":"ca7b1966-197e-4e08-a162-7a3dd7eab8ed","Type":"ContainerStarted","Data":"5e7f6bc1034d0dd4dbf299c6d84e8d7f37c08c068b7b104e4a9130562a03f963"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.921873 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5798d58dff-jkj6h" event={"ID":"2ba9160b-539e-40a1-8d2f-4cb0f25e4084","Type":"ContainerStarted","Data":"ba637dc27b43e0666ae837950c220feca757336e31edea326c28f357e8c17d80"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.921978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5798d58dff-jkj6h" event={"ID":"2ba9160b-539e-40a1-8d2f-4cb0f25e4084","Type":"ContainerStarted","Data":"c88e6f72c7c2a594c630de27a6938403c6e2304f0686a75b010baa09e90e8729"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.936830 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a932dc2-f106-4272-ad37-091b4b8ec1dc" containerID="9574abc912ac543f89ecab186ce697a96f728d5c8d2c4d788b691fc7f8b48058" exitCode=0 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.937061 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkqzt" event={"ID":"9a932dc2-f106-4272-ad37-091b4b8ec1dc","Type":"ContainerDied","Data":"9574abc912ac543f89ecab186ce697a96f728d5c8d2c4d788b691fc7f8b48058"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.937139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkqzt" event={"ID":"9a932dc2-f106-4272-ad37-091b4b8ec1dc","Type":"ContainerStarted","Data":"35203391fe3f877a2bb0bbe5ad4e6fd35868305bc6b411db5861a5b1ca7fbda3"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.953487 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-central-agent" containerID="cri-o://d096557e0ded2d98cb81fa7facdcc255e5edd6990571f94323aea0ed6805da3c" gracePeriod=30 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.954231 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5798d58dff-jkj6h" podStartSLOduration=2.94075306 podStartE2EDuration="49.95420209s" podCreationTimestamp="2025-10-02 11:46:59 +0000 UTC" firstStartedPulling="2025-10-02 11:47:00.445659848 +0000 UTC m=+1140.353159311" lastFinishedPulling="2025-10-02 11:47:47.459108878 +0000 UTC m=+1187.366608341" observedRunningTime="2025-10-02 11:47:48.954114028 +0000 UTC m=+1188.861613491" watchObservedRunningTime="2025-10-02 11:47:48.95420209 +0000 UTC m=+1188.861701553" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.954030 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerStarted","Data":"777995a92e4160ae2ec5cfc47338c05817238f06f8b62cf2ab9030148f32fdd9"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.955033 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.954142 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-notification-agent" containerID="cri-o://b1e50dcbb8e87845339d84d02bb1b2236bf8e25ac4ab541aaf2cc3289e4536cd" gracePeriod=30 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.954110 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="proxy-httpd" 
containerID="cri-o://777995a92e4160ae2ec5cfc47338c05817238f06f8b62cf2ab9030148f32fdd9" gracePeriod=30 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.954132 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="sg-core" containerID="cri-o://0fdf1db59b5dae2c5fbff97335f9813df6221d145b5b7da0f7662dbd4e9a5739" gracePeriod=30 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.968309 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.971617 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b656dd8b-n4tcm" event={"ID":"7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a","Type":"ContainerStarted","Data":"417d13fbab363cca156922fbd39d21c5e30e600efc67ee889e2d38025bee8e5c"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.986648 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.986678 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.986709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerDied","Data":"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.986756 4725 scope.go:117] "RemoveContainer" containerID="cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.985486 4725 generic.go:334] "Generic (PLEG): container finished" podID="0e543110-c94c-4470-9114-1d776fba2216" containerID="cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48" exitCode=0 Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.985577 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e543110-c94c-4470-9114-1d776fba2216","Type":"ContainerDied","Data":"ff95e27fd796ef34d52eeabd8c396650357a29d0d2e15ef6a4656b4c0f86ad98"} Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.974928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts" (OuterVolumeSpecName: "scripts") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.986533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8sck\" (UniqueName: \"kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987132 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987173 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987219 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987247 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987304 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.987333 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs\") pod \"0e543110-c94c-4470-9114-1d776fba2216\" (UID: \"0e543110-c94c-4470-9114-1d776fba2216\") " Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.990650 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.990700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxc5c\" (UniqueName: \"kubernetes.io/projected/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-kube-api-access-pxc5c\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.990736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.990788 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.990980 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.991008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.991610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-logs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.991649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.991965 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.993796 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.994091 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck" (OuterVolumeSpecName: "kube-api-access-l8sck") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "kube-api-access-l8sck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.994126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs" (OuterVolumeSpecName: "logs") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:48 crc kubenswrapper[4725]: I1002 11:47:48.994797 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.003126 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-b725j" podStartSLOduration=6.003109021 podStartE2EDuration="6.003109021s" podCreationTimestamp="2025-10-02 11:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:48.992058419 +0000 UTC m=+1188.899557892" watchObservedRunningTime="2025-10-02 11:47:49.003109021 +0000 UTC m=+1188.910608484" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.028679 4725 generic.go:334] "Generic (PLEG): container finished" podID="804fa613-f386-41a1-975e-835525211cb3" containerID="b7e9b1ee574a8b19eb2a80b250dab80ed08ca9f07c9e584acf0e8fc05d53ed65" exitCode=0 Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.029078 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-56t7s" event={"ID":"804fa613-f386-41a1-975e-835525211cb3","Type":"ContainerDied","Data":"b7e9b1ee574a8b19eb2a80b250dab80ed08ca9f07c9e584acf0e8fc05d53ed65"} Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.045048 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9b656dd8b-n4tcm" podStartSLOduration=11.045027707 podStartE2EDuration="11.045027707s" podCreationTimestamp="2025-10-02 11:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:49.023475048 +0000 UTC m=+1188.930974511" watchObservedRunningTime="2025-10-02 11:47:49.045027707 +0000 UTC m=+1188.952527170" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.093847 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094120 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxc5c\" (UniqueName: \"kubernetes.io/projected/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-kube-api-access-pxc5c\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094183 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094258 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094279 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094357 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-logs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094383 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094462 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8sck\" (UniqueName: \"kubernetes.io/projected/0e543110-c94c-4470-9114-1d776fba2216-kube-api-access-l8sck\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094474 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094498 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.094510 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e543110-c94c-4470-9114-1d776fba2216-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.096300 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.096497 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.381894167 podStartE2EDuration="14.096483625s" podCreationTimestamp="2025-10-02 11:47:35 +0000 UTC" firstStartedPulling="2025-10-02 11:47:36.723221558 +0000 UTC m=+1176.630721031" lastFinishedPulling="2025-10-02 11:47:47.437811026 +0000 UTC m=+1187.345310489" observedRunningTime="2025-10-02 11:47:49.069527214 +0000 UTC m=+1188.977026677" watchObservedRunningTime="2025-10-02 11:47:49.096483625 +0000 UTC m=+1189.003983088" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.096898 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.105946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.105974 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-logs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.110950 4725 scope.go:117] "RemoveContainer" containerID="34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.118361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data" (OuterVolumeSpecName: "config-data") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.122837 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.132334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.136086 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.143243 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.143794 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxc5c\" (UniqueName: \"kubernetes.io/projected/55c8573f-3cb6-4d8c-8b84-dfa5f6221f42-kube-api-access-pxc5c\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.151780 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e543110-c94c-4470-9114-1d776fba2216" (UID: "0e543110-c94c-4470-9114-1d776fba2216"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.176344 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42\") " pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.194093 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.197636 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.197670 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.197680 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.197690 4725 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e543110-c94c-4470-9114-1d776fba2216-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.284258 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.284455 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59acc19b-4915-4b1c-b3cb-09ff04ccff58" path="/var/lib/kubelet/pods/59acc19b-4915-4b1c-b3cb-09ff04ccff58/volumes" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.286911 4725 scope.go:117] "RemoveContainer" containerID="cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48" Oct 02 11:47:49 crc kubenswrapper[4725]: E1002 11:47:49.287473 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48\": container with ID starting with cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48 not found: ID does not exist" containerID="cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.287522 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48"} err="failed to get container status \"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48\": rpc error: code = NotFound desc = could not find container \"cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48\": container with ID starting with cdd1b5995551ef5763d9394184b725f92aabf1deb20ca71b1421527de5caeb48 not found: ID does not exist" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.287551 4725 scope.go:117] "RemoveContainer" containerID="34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16" Oct 02 11:47:49 crc kubenswrapper[4725]: E1002 11:47:49.288662 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16\": container with ID starting with 34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16 not found: ID does not exist" containerID="34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.288697 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16"} err="failed to get container status \"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16\": rpc error: code = NotFound desc = could not find container \"34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16\": container with ID starting with 34e82be31f232c879421e3a712aee1445254b77036bef7960c5cb68a71a8fd16 not found: ID does not exist" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.298462 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom\") pod \"3b2fee18-a983-4641-a5f5-73813a5254cd\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.298527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs\") pod \"3b2fee18-a983-4641-a5f5-73813a5254cd\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.298546 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bqh\" (UniqueName: \"kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh\") pod \"3b2fee18-a983-4641-a5f5-73813a5254cd\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.298625 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle\") pod \"3b2fee18-a983-4641-a5f5-73813a5254cd\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.298677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data\") pod \"3b2fee18-a983-4641-a5f5-73813a5254cd\" (UID: \"3b2fee18-a983-4641-a5f5-73813a5254cd\") " Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.307852 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs" (OuterVolumeSpecName: "logs") pod "3b2fee18-a983-4641-a5f5-73813a5254cd" (UID: "3b2fee18-a983-4641-a5f5-73813a5254cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.308580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3b2fee18-a983-4641-a5f5-73813a5254cd" (UID: "3b2fee18-a983-4641-a5f5-73813a5254cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.325379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh" (OuterVolumeSpecName: "kube-api-access-x7bqh") pod "3b2fee18-a983-4641-a5f5-73813a5254cd" (UID: "3b2fee18-a983-4641-a5f5-73813a5254cd"). InnerVolumeSpecName "kube-api-access-x7bqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.333860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b2fee18-a983-4641-a5f5-73813a5254cd" (UID: "3b2fee18-a983-4641-a5f5-73813a5254cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.400697 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.400736 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.400746 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2fee18-a983-4641-a5f5-73813a5254cd-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.400754 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bqh\" (UniqueName: \"kubernetes.io/projected/3b2fee18-a983-4641-a5f5-73813a5254cd-kube-api-access-x7bqh\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.410051 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.419403 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data" (OuterVolumeSpecName: "config-data") pod "3b2fee18-a983-4641-a5f5-73813a5254cd" (UID: "3b2fee18-a983-4641-a5f5-73813a5254cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.427788 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.437640 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:49 crc kubenswrapper[4725]: E1002 11:47:49.438024 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.438042 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" Oct 02 11:47:49 crc kubenswrapper[4725]: E1002 11:47:49.438079 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.438085 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.438256 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.438281 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" containerName="barbican-api-log" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.450799 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.452696 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.456149 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.456687 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.471587 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.501831 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-logs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.501887 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-scripts\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502044 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502087 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-config-data\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502177 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " 
pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502284 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfktr\" (UniqueName: \"kubernetes.io/projected/138ed7a6-24d0-4071-b142-ece9a296eb65-kube-api-access-mfktr\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.502437 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2fee18-a983-4641-a5f5-73813a5254cd-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.606836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.606893 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.606930 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-config-data\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.606984 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.607007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.607046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfktr\" (UniqueName: \"kubernetes.io/projected/138ed7a6-24d0-4071-b142-ece9a296eb65-kube-api-access-mfktr\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.607083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-logs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.607098 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-scripts\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.607117 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.610950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.613709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/138ed7a6-24d0-4071-b142-ece9a296eb65-logs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.617123 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.617213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-config-data\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.620877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.625323 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138ed7a6-24d0-4071-b142-ece9a296eb65-scripts\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.630586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfktr\" (UniqueName: \"kubernetes.io/projected/138ed7a6-24d0-4071-b142-ece9a296eb65-kube-api-access-mfktr\") pod \"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.650183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"138ed7a6-24d0-4071-b142-ece9a296eb65\") " pod="openstack/glance-default-external-api-0" Oct 02 11:47:49 crc kubenswrapper[4725]: I1002 11:47:49.778493 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.052885 4725 generic.go:334] "Generic (PLEG): container finished" podID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerID="777995a92e4160ae2ec5cfc47338c05817238f06f8b62cf2ab9030148f32fdd9" exitCode=0 Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053170 4725 generic.go:334] "Generic (PLEG): container finished" podID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerID="0fdf1db59b5dae2c5fbff97335f9813df6221d145b5b7da0f7662dbd4e9a5739" exitCode=2 Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053181 4725 generic.go:334] "Generic (PLEG): container finished" podID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerID="b1e50dcbb8e87845339d84d02bb1b2236bf8e25ac4ab541aaf2cc3289e4536cd" exitCode=0 Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053188 4725 generic.go:334] "Generic (PLEG): container finished" podID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerID="d096557e0ded2d98cb81fa7facdcc255e5edd6990571f94323aea0ed6805da3c" exitCode=0 Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053000 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerDied","Data":"777995a92e4160ae2ec5cfc47338c05817238f06f8b62cf2ab9030148f32fdd9"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerDied","Data":"0fdf1db59b5dae2c5fbff97335f9813df6221d145b5b7da0f7662dbd4e9a5739"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerDied","Data":"b1e50dcbb8e87845339d84d02bb1b2236bf8e25ac4ab541aaf2cc3289e4536cd"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.053266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerDied","Data":"d096557e0ded2d98cb81fa7facdcc255e5edd6990571f94323aea0ed6805da3c"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.059718 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f84679f8-gxltw" event={"ID":"3b2fee18-a983-4641-a5f5-73813a5254cd","Type":"ContainerDied","Data":"f6080eb28e46322a301a72aff325eb6294e6be5dd73e9f818cdde475d28cc580"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.059786 4725 scope.go:117] "RemoveContainer" containerID="e04383081b51b13ba6052f3572c50f1981dd41bbe2509cf90a16f735e37c8d9d" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.059870 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f84679f8-gxltw" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.072523 4725 generic.go:334] "Generic (PLEG): container finished" podID="ca7b1966-197e-4e08-a162-7a3dd7eab8ed" containerID="a3346e71f4700662902bdc8bf01640483cfcd2d2057eb988366f95c656c5e8fb" exitCode=0 Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.072683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b725j" event={"ID":"ca7b1966-197e-4e08-a162-7a3dd7eab8ed","Type":"ContainerDied","Data":"a3346e71f4700662902bdc8bf01640483cfcd2d2057eb988366f95c656c5e8fb"} Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.100177 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.139000 4725 scope.go:117] "RemoveContainer" containerID="e9af8109036f1e29fd2c829dc79e8444b2a3f125cbd0d24ecfff240a73832651" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.146794 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.159480 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5f84679f8-gxltw"] Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.403104 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.585949 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.624580 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tw4z2" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.626537 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-56t7s" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633562 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633749 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633803 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633836 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dprvn\" (UniqueName: \"kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633885 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwlj7\" (UniqueName: \"kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633947 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: 
\"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.633996 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd\") pod \"5327ca9d-2534-452b-8d42-266f5244c6a6\" (UID: \"5327ca9d-2534-452b-8d42-266f5244c6a6\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.634019 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.634045 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts\") pod \"804fa613-f386-41a1-975e-835525211cb3\" (UID: \"804fa613-f386-41a1-975e-835525211cb3\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.634089 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv62h\" (UniqueName: \"kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h\") pod \"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b\" (UID: \"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.643484 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h" (OuterVolumeSpecName: "kube-api-access-cv62h") pod "c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" (UID: "c7eefa9c-3d9f-46d7-a47a-9886fd2b120b"). InnerVolumeSpecName "kube-api-access-cv62h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.643913 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.644195 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.644203 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn" (OuterVolumeSpecName: "kube-api-access-dprvn") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "kube-api-access-dprvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.644815 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.653664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.659875 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts" (OuterVolumeSpecName: "scripts") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.664188 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7" (OuterVolumeSpecName: "kube-api-access-bwlj7") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "kube-api-access-bwlj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.666880 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts" (OuterVolumeSpecName: "scripts") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.701362 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.710474 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkqzt" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.714942 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.735934 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q55p\" (UniqueName: \"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p\") pod \"9a932dc2-f106-4272-ad37-091b4b8ec1dc\" (UID: \"9a932dc2-f106-4272-ad37-091b4b8ec1dc\") " Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736472 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736493 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/804fa613-f386-41a1-975e-835525211cb3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736507 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736523 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dprvn\" (UniqueName: \"kubernetes.io/projected/5327ca9d-2534-452b-8d42-266f5244c6a6-kube-api-access-dprvn\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736535 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwlj7\" (UniqueName: \"kubernetes.io/projected/804fa613-f386-41a1-975e-835525211cb3-kube-api-access-bwlj7\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736546 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736555 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736565 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5327ca9d-2534-452b-8d42-266f5244c6a6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736575 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736585 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.736598 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv62h\" (UniqueName: \"kubernetes.io/projected/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b-kube-api-access-cv62h\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.744438 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p" (OuterVolumeSpecName: "kube-api-access-7q55p") pod "9a932dc2-f106-4272-ad37-091b4b8ec1dc" (UID: "9a932dc2-f106-4272-ad37-091b4b8ec1dc"). InnerVolumeSpecName "kube-api-access-7q55p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.787037 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data" (OuterVolumeSpecName: "config-data") pod "804fa613-f386-41a1-975e-835525211cb3" (UID: "804fa613-f386-41a1-975e-835525211cb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.789860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.804389 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data" (OuterVolumeSpecName: "config-data") pod "5327ca9d-2534-452b-8d42-266f5244c6a6" (UID: "5327ca9d-2534-452b-8d42-266f5244c6a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.838231 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804fa613-f386-41a1-975e-835525211cb3-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.838265 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q55p\" (UniqueName: \"kubernetes.io/projected/9a932dc2-f106-4272-ad37-091b4b8ec1dc-kube-api-access-7q55p\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.838276 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:50 crc kubenswrapper[4725]: I1002 11:47:50.838285 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5327ca9d-2534-452b-8d42-266f5244c6a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.098853 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-tw4z2"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.098933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tw4z2" event={"ID":"c7eefa9c-3d9f-46d7-a47a-9886fd2b120b","Type":"ContainerDied","Data":"8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.099326 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8617b1bd6d0690b5da399df8493f9f95e1324ff3e3b0ed4f04a3c310ef75e8b1"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.102016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkqzt" event={"ID":"9a932dc2-f106-4272-ad37-091b4b8ec1dc","Type":"ContainerDied","Data":"35203391fe3f877a2bb0bbe5ad4e6fd35868305bc6b411db5861a5b1ca7fbda3"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.102063 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35203391fe3f877a2bb0bbe5ad4e6fd35868305bc6b411db5861a5b1ca7fbda3"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.102141 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkqzt"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.110242 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-56t7s"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.110235 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-56t7s" event={"ID":"804fa613-f386-41a1-975e-835525211cb3","Type":"ContainerDied","Data":"09387a217209b320b2476fab6e3558416e6541456518f9683ed8e01ab4d57cd5"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.110375 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09387a217209b320b2476fab6e3558416e6541456518f9683ed8e01ab4d57cd5"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.125644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5327ca9d-2534-452b-8d42-266f5244c6a6","Type":"ContainerDied","Data":"ca73ed623057cb626ef913635b2081540dda6a8a2cf5766648dc2f3f5ab3a47e"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.125695 4725 scope.go:117] "RemoveContainer" containerID="777995a92e4160ae2ec5cfc47338c05817238f06f8b62cf2ab9030148f32fdd9"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.125648 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.132309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"138ed7a6-24d0-4071-b142-ece9a296eb65","Type":"ContainerStarted","Data":"7b788b773e6cff378b600ce35e3ecda2733c4cb8b8196c2082466c5332d78936"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.136626 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42","Type":"ContainerStarted","Data":"8badb42035d57ad5e4de74a2d87e7e7df18e9f9ff88196a1565c6e1ba980614f"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.136677 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42","Type":"ContainerStarted","Data":"523c3d9ff59dd705bbbb8f0a3a7e5d39a51b6febac96ead68b8246438aa9f5c9"}
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.332629 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e543110-c94c-4470-9114-1d776fba2216" path="/var/lib/kubelet/pods/0e543110-c94c-4470-9114-1d776fba2216/volumes"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.333549 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2fee18-a983-4641-a5f5-73813a5254cd" path="/var/lib/kubelet/pods/3b2fee18-a983-4641-a5f5-73813a5254cd/volumes"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.346989 4725 scope.go:117] "RemoveContainer" containerID="0fdf1db59b5dae2c5fbff97335f9813df6221d145b5b7da0f7662dbd4e9a5739"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433205 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433574 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-notification-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433586 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-notification-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433612 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804fa613-f386-41a1-975e-835525211cb3" containerName="cinder-db-sync"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433618 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="804fa613-f386-41a1-975e-835525211cb3" containerName="cinder-db-sync"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433628 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="sg-core"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433634 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="sg-core"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433649 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="proxy-httpd"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433655 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="proxy-httpd"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433676 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a932dc2-f106-4272-ad37-091b4b8ec1dc" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433681 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a932dc2-f106-4272-ad37-091b4b8ec1dc" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433693 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-central-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433701 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-central-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.433707 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433712 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433904 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-notification-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433922 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="proxy-httpd"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433933 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="ceilometer-central-agent"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433947 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="804fa613-f386-41a1-975e-835525211cb3" containerName="cinder-db-sync"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433962 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a932dc2-f106-4272-ad37-091b4b8ec1dc" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433979 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" containerName="sg-core"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.433987 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.435103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.435282 4725 scope.go:117] "RemoveContainer" containerID="b1e50dcbb8e87845339d84d02bb1b2236bf8e25ac4ab541aaf2cc3289e4536cd"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.441872 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.451631 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w7pj7"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.453992 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.455322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.455543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.455639 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.455750 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.455892 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.456084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqhl\" (UniqueName: \"kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.464447 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.464752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.495499 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.501990 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.503161 4725 scope.go:117] "RemoveContainer" containerID="d096557e0ded2d98cb81fa7facdcc255e5edd6990571f94323aea0ed6805da3c"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.509639 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.512154 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.518396 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.518721 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.518894 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.526116 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558150 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558204 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558272 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558282 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558289 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qprhz\" (UniqueName: \"kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558337 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558753 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558798 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558815 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dgh\" (UniqueName: \"kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558922 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.558952 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqhl\" (UniqueName: \"kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.559664 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.563686 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.576883 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.577949 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.578535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.587396 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqhl\" (UniqueName: \"kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl\") pod \"cinder-scheduler-0\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.592757 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660561 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660591 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660616 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qprhz\" (UniqueName: \"kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dgh\" (UniqueName: \"kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.660833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.661182 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.661894 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.668026 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.672456 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.672702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.673122 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.673310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.673544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.673946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.675987 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b725j"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.681246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.692106 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.693568 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: E1002 11:47:51.694078 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7b1966-197e-4e08-a162-7a3dd7eab8ed" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.694180 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7b1966-197e-4e08-a162-7a3dd7eab8ed" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.694462 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7b1966-197e-4e08-a162-7a3dd7eab8ed" containerName="mariadb-database-create"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.695382 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.697908 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.700953 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dgh\" (UniqueName: \"kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh\") pod \"dnsmasq-dns-69c986f6d7-wzmzv\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.716073 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qprhz\" (UniqueName: \"kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz\") pod \"ceilometer-0\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.761802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.761858 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.762084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.797169 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.819006 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.857828 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.864750 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.872308 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9tzz\" (UniqueName: \"kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz\") pod \"ca7b1966-197e-4e08-a162-7a3dd7eab8ed\" (UID: \"ca7b1966-197e-4e08-a162-7a3dd7eab8ed\") "
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873174 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4prsl\" (UniqueName: \"kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873391 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873422 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873545 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873590 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.873755 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.877080 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz" (OuterVolumeSpecName: "kube-api-access-w9tzz") pod "ca7b1966-197e-4e08-a162-7a3dd7eab8ed" (UID: "ca7b1966-197e-4e08-a162-7a3dd7eab8ed"). InnerVolumeSpecName "kube-api-access-w9tzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.879132 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.879230 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.974703 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4prsl\" (UniqueName: \"kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.975088 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.975136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.975219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.975270 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9tzz\" (UniqueName: \"kubernetes.io/projected/ca7b1966-197e-4e08-a162-7a3dd7eab8ed-kube-api-access-w9tzz\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.975701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.988807 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.988831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:51 crc kubenswrapper[4725]: I1002 11:47:51.993948 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4prsl\" (UniqueName: \"kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl\") pod \"cinder-api-0\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") " pod="openstack/cinder-api-0"
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.130528 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.172183 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b725j" event={"ID":"ca7b1966-197e-4e08-a162-7a3dd7eab8ed","Type":"ContainerDied","Data":"5e7f6bc1034d0dd4dbf299c6d84e8d7f37c08c068b7b104e4a9130562a03f963"}
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.172233 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e7f6bc1034d0dd4dbf299c6d84e8d7f37c08c068b7b104e4a9130562a03f963"
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.172329 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b725j"
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.179107 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"138ed7a6-24d0-4071-b142-ece9a296eb65","Type":"ContainerStarted","Data":"9f19d82c06eb45e0438ec6ad1e1ac3e9329f0b872b6b5fdfe4fb0e344c2af9e3"}
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.440491 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.610567 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"]
Oct 02 11:47:52 crc kubenswrapper[4725]: W1002 11:47:52.625365 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf37a2635_e342_497f_95af_54b5968a4daf.slice/crio-0ba73907bdf157ce9309af0d86a76a34db052d4765067faa5ddb78a8dd459142 WatchSource:0}: Error finding container 0ba73907bdf157ce9309af0d86a76a34db052d4765067faa5ddb78a8dd459142: Status 404 returned error can't find the container with id 0ba73907bdf157ce9309af0d86a76a34db052d4765067faa5ddb78a8dd459142
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.628122 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:47:52 crc kubenswrapper[4725]: I1002 11:47:52.738776 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:52 crc kubenswrapper[4725]: W1002 11:47:52.745090 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe39d406_02f1_4239_9877_17fe68de3d3b.slice/crio-f86e9d559bde240742975ff937ee6aaf2943be6d262f03a7300b8a77ca7ffb31 WatchSource:0}: Error finding container f86e9d559bde240742975ff937ee6aaf2943be6d262f03a7300b8a77ca7ffb31: Status 404 returned error can't find the container with id f86e9d559bde240742975ff937ee6aaf2943be6d262f03a7300b8a77ca7ffb31
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.201369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerStarted","Data":"f86e9d559bde240742975ff937ee6aaf2943be6d262f03a7300b8a77ca7ffb31"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.209679 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"138ed7a6-24d0-4071-b142-ece9a296eb65","Type":"ContainerStarted","Data":"868655d13bea42e42311dab79ae84776bb74a737698b6eeb297621dea588c65d"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.212160 4725 generic.go:334] "Generic (PLEG): container finished" podID="f37a2635-e342-497f-95af-54b5968a4daf" containerID="57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda" exitCode=0
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.212238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" event={"ID":"f37a2635-e342-497f-95af-54b5968a4daf","Type":"ContainerDied","Data":"57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.212266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" event={"ID":"f37a2635-e342-497f-95af-54b5968a4daf","Type":"ContainerStarted","Data":"0ba73907bdf157ce9309af0d86a76a34db052d4765067faa5ddb78a8dd459142"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.218275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55c8573f-3cb6-4d8c-8b84-dfa5f6221f42","Type":"ContainerStarted","Data":"d5f729e4d1337db27b59d464c5d60451639f70dd6cd8b2356b1ca8c5b602c821"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.220011 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerStarted","Data":"79eff833c374aac2d1682e38e55f349076dddb0b98c5d1cc954cd5801c4a407b"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.221498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerStarted","Data":"834c0a9119a8c7c8c253761b9bb670772f303320803f31e1e451905b8083c8bb"}
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.234013 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.233993173 podStartE2EDuration="4.233993173s" podCreationTimestamp="2025-10-02 11:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:53.229388801 +0000 UTC m=+1193.136888284" watchObservedRunningTime="2025-10-02 11:47:53.233993173 +0000 UTC m=+1193.141492636"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.287071 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5327ca9d-2534-452b-8d42-266f5244c6a6" path="/var/lib/kubelet/pods/5327ca9d-2534-452b-8d42-266f5244c6a6/volumes"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.590931 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.590913463 podStartE2EDuration="5.590913463s" podCreationTimestamp="2025-10-02 11:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:53.288406078 +0000 UTC m=+1193.195905541" watchObservedRunningTime="2025-10-02 11:47:53.590913463 +0000 UTC m=+1193.498412946"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.592388 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a442-account-create-zrmtv"]
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.596059 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a442-account-create-zrmtv"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.598107 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.613405 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a442-account-create-zrmtv"]
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.720138 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnw2n\" (UniqueName: \"kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n\") pod \"nova-api-a442-account-create-zrmtv\" (UID: \"9514d7f5-f9ab-4f77-9bea-5952912df791\") " pod="openstack/nova-api-a442-account-create-zrmtv"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.821653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnw2n\" (UniqueName: \"kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n\") pod \"nova-api-a442-account-create-zrmtv\" (UID: \"9514d7f5-f9ab-4f77-9bea-5952912df791\") " pod="openstack/nova-api-a442-account-create-zrmtv"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.840216 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnw2n\" (UniqueName: \"kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n\") pod \"nova-api-a442-account-create-zrmtv\" (UID: \"9514d7f5-f9ab-4f77-9bea-5952912df791\") " pod="openstack/nova-api-a442-account-create-zrmtv"
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.954837 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:53 crc kubenswrapper[4725]: I1002 11:47:53.973556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a442-account-create-zrmtv"
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.277430 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" event={"ID":"f37a2635-e342-497f-95af-54b5968a4daf","Type":"ContainerStarted","Data":"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e"}
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.277959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv"
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.285789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerStarted","Data":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"}
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.326205 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" podStartSLOduration=3.32618049 podStartE2EDuration="3.32618049s" podCreationTimestamp="2025-10-02 11:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:54.310343422 +0000 UTC m=+1194.217842905" watchObservedRunningTime="2025-10-02 11:47:54.32618049 +0000 UTC m=+1194.233679953"
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.335421 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerStarted","Data":"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af"}
Oct 02 11:47:54 crc kubenswrapper[4725]: I1002 11:47:54.629685 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a442-account-create-zrmtv"]
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.354666 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerStarted","Data":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.357327 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerStarted","Data":"6f3bbb170521447430a26ecf8a0961a4fbbc5af7b255d197606e8cb65ea15fe9"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.357358 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerStarted","Data":"a75ccedad18828ea9eb33a380074c708297e79c11abf35dc6a6d7199ca9c3346"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.359990 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerStarted","Data":"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.360128 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api-log" containerID="cri-o://86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af" gracePeriod=30
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.360314 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.360356 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api" containerID="cri-o://f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746" gracePeriod=30
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.368741 4725 generic.go:334] "Generic (PLEG): container finished" podID="e3e740ae-3b77-4497-b730-6cbd4f960d84" containerID="ecca87986d1d6c49a7d5f56ca6d9c88ab4042ea69ae8bf49fdaeb8ec9a240921" exitCode=0
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.368822 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7jgcd" event={"ID":"e3e740ae-3b77-4497-b730-6cbd4f960d84","Type":"ContainerDied","Data":"ecca87986d1d6c49a7d5f56ca6d9c88ab4042ea69ae8bf49fdaeb8ec9a240921"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.371481 4725 generic.go:334] "Generic (PLEG): container finished" podID="9514d7f5-f9ab-4f77-9bea-5952912df791" containerID="d88388a4db264ce81276415aad8169cf189633a0b5ec52aaeed3c9be83072a71" exitCode=0
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.372580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a442-account-create-zrmtv" event={"ID":"9514d7f5-f9ab-4f77-9bea-5952912df791","Type":"ContainerDied","Data":"d88388a4db264ce81276415aad8169cf189633a0b5ec52aaeed3c9be83072a71"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.372610 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a442-account-create-zrmtv" event={"ID":"9514d7f5-f9ab-4f77-9bea-5952912df791","Type":"ContainerStarted","Data":"92e06e4d0b1453eee513c50992ebe8602e13cb4f9ee65ba15ce704a8d9dcfd90"}
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.387331 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.3227499 podStartE2EDuration="4.387312268s" podCreationTimestamp="2025-10-02 11:47:51 +0000 UTC" firstStartedPulling="2025-10-02 11:47:52.476709085 +0000 UTC m=+1192.384208548" lastFinishedPulling="2025-10-02 11:47:53.541271453 +0000 UTC m=+1193.448770916" observedRunningTime="2025-10-02 11:47:55.383806246 +0000 UTC m=+1195.291305709" watchObservedRunningTime="2025-10-02 11:47:55.387312268 +0000 UTC m=+1195.294811731"
Oct 02 11:47:55 crc kubenswrapper[4725]: I1002 11:47:55.425386 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.425371193 podStartE2EDuration="4.425371193s" podCreationTimestamp="2025-10-02 11:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:47:55.421683015 +0000 UTC m=+1195.329182478" watchObservedRunningTime="2025-10-02 11:47:55.425371193 +0000 UTC m=+1195.332870656"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.235322 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.291691 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.291937 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292017 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4prsl\" (UniqueName: \"kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292161 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292197 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292324 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs\") pod \"fe39d406-02f1-4239-9877-17fe68de3d3b\" (UID: \"fe39d406-02f1-4239-9877-17fe68de3d3b\") "
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292820 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.292979 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs" (OuterVolumeSpecName: "logs") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.299885 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl" (OuterVolumeSpecName: "kube-api-access-4prsl") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "kube-api-access-4prsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.300104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.303892 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts" (OuterVolumeSpecName: "scripts") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.333823 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.373274 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data" (OuterVolumeSpecName: "config-data") pod "fe39d406-02f1-4239-9877-17fe68de3d3b" (UID: "fe39d406-02f1-4239-9877-17fe68de3d3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.383325 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerStarted","Data":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"}
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.386914 4725 generic.go:334] "Generic (PLEG): container finished" podID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerID="f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746" exitCode=0
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.386953 4725 generic.go:334] "Generic (PLEG): container finished" podID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerID="86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af" exitCode=143
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.387062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerDied","Data":"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746"}
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.387107 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerDied","Data":"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af"}
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.387120 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe39d406-02f1-4239-9877-17fe68de3d3b","Type":"ContainerDied","Data":"f86e9d559bde240742975ff937ee6aaf2943be6d262f03a7300b8a77ca7ffb31"}
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.387137 4725 scope.go:117] "RemoveContainer" containerID="f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.387513 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394494 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394523 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394534 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe39d406-02f1-4239-9877-17fe68de3d3b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394543 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe39d406-02f1-4239-9877-17fe68de3d3b-logs\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394551 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-scripts\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394582 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe39d406-02f1-4239-9877-17fe68de3d3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.394590 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4prsl\" (UniqueName: \"kubernetes.io/projected/fe39d406-02f1-4239-9877-17fe68de3d3b-kube-api-access-4prsl\") on node \"crc\" DevicePath \"\""
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.457225 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.462620 4725 scope.go:117] "RemoveContainer" containerID="86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.463955 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.480742 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 02 11:47:56 crc kubenswrapper[4725]: E1002 11:47:56.481135 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.481151 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api"
Oct 02 11:47:56 crc kubenswrapper[4725]: E1002 11:47:56.481165 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api-log"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.481171 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api-log"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.481342 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api"
Oct 02 11:47:56 crc kubenswrapper[4725]: I1002
11:47:56.481360 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" containerName="cinder-api-log" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.493206 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.498148 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.498306 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.501887 4725 scope.go:117] "RemoveContainer" containerID="f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.502238 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.504172 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 11:47:56 crc kubenswrapper[4725]: E1002 11:47:56.512906 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746\": container with ID starting with f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746 not found: ID does not exist" containerID="f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.512958 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746"} err="failed to get container status \"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746\": rpc error: code = NotFound desc = could not find container \"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746\": container with ID starting with f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746 not found: ID does not exist" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.513001 4725 scope.go:117] "RemoveContainer" containerID="86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af" Oct 02 11:47:56 crc kubenswrapper[4725]: E1002 11:47:56.515764 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af\": container with ID starting with 86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af not found: ID does not exist" containerID="86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.515802 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af"} err="failed to get container status \"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af\": rpc error: code = NotFound desc = could not find container \"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af\": container with ID starting with 86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af not found: ID does not exist" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.515830 4725 scope.go:117] "RemoveContainer" 
containerID="f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.521353 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746"} err="failed to get container status \"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746\": rpc error: code = NotFound desc = could not find container \"f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746\": container with ID starting with f0125e9162332d0fee2eb8648bc5887985d72493afb5d6d8449c57b9443d0746 not found: ID does not exist" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.521479 4725 scope.go:117] "RemoveContainer" containerID="86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.523673 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af"} err="failed to get container status \"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af\": rpc error: code = NotFound desc = could not find container \"86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af\": container with ID starting with 86e61d4006de04962d706c8d0f01e003c57cd2a0b4c7bc8fbb3638a0921373af not found: ID does not exist" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ced9aab-c4e7-4463-9d29-d32521d07220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597456 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597515 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597530 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-scripts\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597551 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pzzm\" (UniqueName: \"kubernetes.io/projected/9ced9aab-c4e7-4463-9d29-d32521d07220-kube-api-access-9pzzm\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ced9aab-c4e7-4463-9d29-d32521d07220-logs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.597622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699336 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ced9aab-c4e7-4463-9d29-d32521d07220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699379 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-scripts\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699495 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9pzzm\" (UniqueName: \"kubernetes.io/projected/9ced9aab-c4e7-4463-9d29-d32521d07220-kube-api-access-9pzzm\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699518 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ced9aab-c4e7-4463-9d29-d32521d07220-logs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.699543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.706169 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.706351 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ced9aab-c4e7-4463-9d29-d32521d07220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.706799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ced9aab-c4e7-4463-9d29-d32521d07220-logs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.707040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-scripts\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.707293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.710557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.710608 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-config-data\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.716597 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9ced9aab-c4e7-4463-9d29-d32521d07220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.720891 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pzzm\" (UniqueName: \"kubernetes.io/projected/9ced9aab-c4e7-4463-9d29-d32521d07220-kube-api-access-9pzzm\") pod \"cinder-api-0\" (UID: \"9ced9aab-c4e7-4463-9d29-d32521d07220\") " pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.754510 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.784257 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a442-account-create-zrmtv" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.822453 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.836379 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.887914 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.913337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnw2n\" (UniqueName: \"kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n\") pod \"9514d7f5-f9ab-4f77-9bea-5952912df791\" (UID: \"9514d7f5-f9ab-4f77-9bea-5952912df791\") " Oct 02 11:47:56 crc kubenswrapper[4725]: I1002 11:47:56.931877 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n" (OuterVolumeSpecName: "kube-api-access-mnw2n") pod "9514d7f5-f9ab-4f77-9bea-5952912df791" (UID: "9514d7f5-f9ab-4f77-9bea-5952912df791"). InnerVolumeSpecName "kube-api-access-mnw2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.015042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config\") pod \"e3e740ae-3b77-4497-b730-6cbd4f960d84\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.015500 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle\") pod \"e3e740ae-3b77-4497-b730-6cbd4f960d84\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.015536 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv5t2\" (UniqueName: \"kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2\") pod \"e3e740ae-3b77-4497-b730-6cbd4f960d84\" (UID: \"e3e740ae-3b77-4497-b730-6cbd4f960d84\") " Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.016937 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnw2n\" (UniqueName: \"kubernetes.io/projected/9514d7f5-f9ab-4f77-9bea-5952912df791-kube-api-access-mnw2n\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.021627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2" (OuterVolumeSpecName: "kube-api-access-jv5t2") pod "e3e740ae-3b77-4497-b730-6cbd4f960d84" (UID: "e3e740ae-3b77-4497-b730-6cbd4f960d84"). InnerVolumeSpecName "kube-api-access-jv5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.058168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3e740ae-3b77-4497-b730-6cbd4f960d84" (UID: "e3e740ae-3b77-4497-b730-6cbd4f960d84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.064845 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config" (OuterVolumeSpecName: "config") pod "e3e740ae-3b77-4497-b730-6cbd4f960d84" (UID: "e3e740ae-3b77-4497-b730-6cbd4f960d84"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.118570 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.118600 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv5t2\" (UniqueName: \"kubernetes.io/projected/e3e740ae-3b77-4497-b730-6cbd4f960d84-kube-api-access-jv5t2\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.118613 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e740ae-3b77-4497-b730-6cbd4f960d84-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.278850 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe39d406-02f1-4239-9877-17fe68de3d3b" path="/var/lib/kubelet/pods/fe39d406-02f1-4239-9877-17fe68de3d3b/volumes" Oct 02 11:47:57 crc kubenswrapper[4725]: W1002 11:47:57.335476 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ced9aab_c4e7_4463_9d29_d32521d07220.slice/crio-51e2805a653d636874b28a54e61dd483969beb004fae105ab677b91c0389f45b WatchSource:0}: Error finding container 51e2805a653d636874b28a54e61dd483969beb004fae105ab677b91c0389f45b: Status 404 returned error can't find the container with id 51e2805a653d636874b28a54e61dd483969beb004fae105ab677b91c0389f45b Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.337706 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.399167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerStarted","Data":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.399299 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="proxy-httpd" containerID="cri-o://ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" gracePeriod=30 Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.399328 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="sg-core" containerID="cri-o://6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" gracePeriod=30 Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.399295 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-central-agent" containerID="cri-o://01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" gracePeriod=30 Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.399399 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-notification-agent" containerID="cri-o://086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" gracePeriod=30 Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 
11:47:57.399326 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.412975 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7jgcd" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.413004 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7jgcd" event={"ID":"e3e740ae-3b77-4497-b730-6cbd4f960d84","Type":"ContainerDied","Data":"0f8ecdeeab2fcc5edd7387821a529e45e19e8c55072730c518e342eea03ef91f"} Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.413036 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8ecdeeab2fcc5edd7387821a529e45e19e8c55072730c518e342eea03ef91f" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.415156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ced9aab-c4e7-4463-9d29-d32521d07220","Type":"ContainerStarted","Data":"51e2805a653d636874b28a54e61dd483969beb004fae105ab677b91c0389f45b"} Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.419918 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a442-account-create-zrmtv" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.420009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a442-account-create-zrmtv" event={"ID":"9514d7f5-f9ab-4f77-9bea-5952912df791","Type":"ContainerDied","Data":"92e06e4d0b1453eee513c50992ebe8602e13cb4f9ee65ba15ce704a8d9dcfd90"} Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.420037 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e06e4d0b1453eee513c50992ebe8602e13cb4f9ee65ba15ce704a8d9dcfd90" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.431195 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.121986577 podStartE2EDuration="6.431179595s" podCreationTimestamp="2025-10-02 11:47:51 +0000 UTC" firstStartedPulling="2025-10-02 11:47:52.630860543 +0000 UTC m=+1192.538359996" lastFinishedPulling="2025-10-02 11:47:56.940053551 +0000 UTC m=+1196.847553014" observedRunningTime="2025-10-02 11:47:57.427025165 +0000 UTC m=+1197.334524628" watchObservedRunningTime="2025-10-02 11:47:57.431179595 +0000 UTC m=+1197.338679058" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.722646 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"] Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.722929 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="dnsmasq-dns" containerID="cri-o://f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e" gracePeriod=10 Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.788972 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:47:57 crc kubenswrapper[4725]: E1002 11:47:57.789391 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9514d7f5-f9ab-4f77-9bea-5952912df791" containerName="mariadb-account-create" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.789409 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9514d7f5-f9ab-4f77-9bea-5952912df791" containerName="mariadb-account-create" Oct 
02 11:47:57 crc kubenswrapper[4725]: E1002 11:47:57.789424 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e740ae-3b77-4497-b730-6cbd4f960d84" containerName="neutron-db-sync" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.789430 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e740ae-3b77-4497-b730-6cbd4f960d84" containerName="neutron-db-sync" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.789616 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9514d7f5-f9ab-4f77-9bea-5952912df791" containerName="mariadb-account-create" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.789643 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e740ae-3b77-4497-b730-6cbd4f960d84" containerName="neutron-db-sync" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.790871 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.794620 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.796262 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.799179 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.799375 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gwhvn" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.799483 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.799738 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.819649 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.831152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.831213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.831246 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.831306 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv556\" (UniqueName: 
\"kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.831322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.847826 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933467 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv556\" (UniqueName: \"kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztgq8\" (UniqueName: \"kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933761 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933852 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.933954 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.940005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.941176 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.941228 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.947131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:57 crc kubenswrapper[4725]: I1002 11:47:57.953768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv556\" (UniqueName: \"kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556\") pod \"neutron-d86865b78-gxbbf\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035131 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035270 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.035353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztgq8\" (UniqueName: \"kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.036213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.036931 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.037696 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.038220 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.038586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.058140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztgq8\" (UniqueName: \"kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8\") pod \"dnsmasq-dns-5784cf869f-bvzll\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.130913 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.155768 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.389872 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.398238 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.444855 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.444911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.444964 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445003 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445037 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445077 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445221 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5dgh\" (UniqueName: \"kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qprhz\" (UniqueName: \"kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445352 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0\") pod \"f37a2635-e342-497f-95af-54b5968a4daf\" (UID: \"f37a2635-e342-497f-95af-54b5968a4daf\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.445372 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd\") pod \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\" (UID: \"fab4c603-5b7c-4538-8630-64e6eabc1b9a\") " Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.446503 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.457911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486155 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts" (OuterVolumeSpecName: "scripts") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486533 4725 generic.go:334] "Generic (PLEG): container finished" podID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" exitCode=0 Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486560 4725 generic.go:334] "Generic (PLEG): container finished" podID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" exitCode=2 Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486571 4725 generic.go:334] "Generic (PLEG): container finished" podID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" exitCode=0 Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486580 4725 generic.go:334] "Generic (PLEG): container finished" podID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" exitCode=0 Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerDied","Data":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerDied","Data":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerDied","Data":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486682 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerDied","Data":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486693 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fab4c603-5b7c-4538-8630-64e6eabc1b9a","Type":"ContainerDied","Data":"79eff833c374aac2d1682e38e55f349076dddb0b98c5d1cc954cd5801c4a407b"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.486715 4725 scope.go:117] "RemoveContainer" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.487061 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.490609 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh" (OuterVolumeSpecName: "kube-api-access-p5dgh") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "kube-api-access-p5dgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.493868 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz" (OuterVolumeSpecName: "kube-api-access-qprhz") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "kube-api-access-qprhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.496803 4725 generic.go:334] "Generic (PLEG): container finished" podID="f37a2635-e342-497f-95af-54b5968a4daf" containerID="f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e" exitCode=0 Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.496911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" event={"ID":"f37a2635-e342-497f-95af-54b5968a4daf","Type":"ContainerDied","Data":"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.496960 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" event={"ID":"f37a2635-e342-497f-95af-54b5968a4daf","Type":"ContainerDied","Data":"0ba73907bdf157ce9309af0d86a76a34db052d4765067faa5ddb78a8dd459142"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.497018 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wzmzv" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.505963 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.506233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ced9aab-c4e7-4463-9d29-d32521d07220","Type":"ContainerStarted","Data":"9e110c0382442bafff4f175c122fb0298581717c853df940a15cb8804cf1327e"} Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.532211 4725 scope.go:117] "RemoveContainer" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.545471 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550059 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550089 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qprhz\" (UniqueName: \"kubernetes.io/projected/fab4c603-5b7c-4538-8630-64e6eabc1b9a-kube-api-access-qprhz\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550103 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550114 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550125 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fab4c603-5b7c-4538-8630-64e6eabc1b9a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550135 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.550145 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5dgh\" (UniqueName: \"kubernetes.io/projected/f37a2635-e342-497f-95af-54b5968a4daf-kube-api-access-p5dgh\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.571487 4725 scope.go:117] "RemoveContainer" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.595024 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.602410 4725 scope.go:117] "RemoveContainer" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.640214 4725 scope.go:117] "RemoveContainer" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.641416 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": container with ID starting with ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df not found: ID does not exist" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.641464 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} err="failed to get container status \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": rpc error: code = NotFound desc = could not find container \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": container with ID starting with ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.641483 4725 scope.go:117] "RemoveContainer" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.648308 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": container with ID starting with 6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925 not found: ID does not exist" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.648362 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"} err="failed to get container status \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": rpc error: code = NotFound desc = could not find container \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": container with ID starting with 6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.648391 4725 scope.go:117] "RemoveContainer" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.648988 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": container with ID starting with 086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4 not found: ID does not exist" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.649039 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"} err="failed to get container status \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": rpc error: code = NotFound desc = could not find container \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": container with ID starting with 086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.649068 4725 scope.go:117] "RemoveContainer" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.649182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.649349 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config" (OuterVolumeSpecName: "config") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.650931 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": container with ID starting with 01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b not found: ID does not exist" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.650959 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"} err="failed to get container status \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": rpc error: code = NotFound desc = could not find container \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": container with ID starting with 01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.650975 4725 scope.go:117] "RemoveContainer" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.651329 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.651351 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.651361 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc 
kubenswrapper[4725]: I1002 11:47:58.651592 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.653680 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} err="failed to get container status \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": rpc error: code = NotFound desc = could not find container \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": container with ID starting with ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.653707 4725 scope.go:117] "RemoveContainer" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.654163 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"} err="failed to get container status \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": rpc error: code = NotFound desc = could not find container \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": container with ID starting with 6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.654185 4725 scope.go:117] "RemoveContainer" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.655516 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"} err="failed to get container status \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": rpc error: code = NotFound desc = could not find container \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": container with ID starting with 086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.655544 4725 scope.go:117] "RemoveContainer" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.657411 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"} err="failed to get container status \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": rpc error: code = NotFound desc = could not find container \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": container with ID starting with 01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.657429 4725 scope.go:117] "RemoveContainer" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: 
I1002 11:47:58.658183 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} err="failed to get container status \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": rpc error: code = NotFound desc = could not find container \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": container with ID starting with ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.658197 4725 scope.go:117] "RemoveContainer" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.660083 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"} err="failed to get container status \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": rpc error: code = NotFound desc = could not find container \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": container with ID starting with 6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.660108 4725 scope.go:117] "RemoveContainer" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.660249 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f37a2635-e342-497f-95af-54b5968a4daf" (UID: "f37a2635-e342-497f-95af-54b5968a4daf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.660877 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"} err="failed to get container status \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": rpc error: code = NotFound desc = could not find container \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": container with ID starting with 086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.660899 4725 scope.go:117] "RemoveContainer" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.661259 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"} err="failed to get container status \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": rpc error: code = NotFound desc = could not find container \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": container with ID starting with 01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.661275 4725 scope.go:117] "RemoveContainer" containerID="ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.661685 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df"} err="failed to get container status \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": rpc error: code = NotFound desc = could not find container \"ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df\": container with ID starting with ee3ff611b890fc5b37e7158b8b75c0a5a0490cfb29a601ad2d0d99a4544a58df not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.661704 4725 scope.go:117] "RemoveContainer" containerID="6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.662516 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925"} err="failed to get container status \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": rpc error: code = NotFound desc = could not find container \"6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925\": container with ID starting with 6a313a94d57cdfee8d1a4c6e24df227f0391764a0117af2580d416d2c58a7925 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.662538 4725 scope.go:117] "RemoveContainer" containerID="086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.663091 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4"} err="failed to get container status \"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": rpc error: code = NotFound desc = could not find container 
\"086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4\": container with ID starting with 086a6427623bcc0f1022c3e7eed5e5b876583849de85d37cc59738520fb746b4 not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.663111 4725 scope.go:117] "RemoveContainer" containerID="01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.663349 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b"} err="failed to get container status \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": rpc error: code = NotFound desc = could not find container \"01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b\": container with ID starting with 01d671d2d9b7577b95e20a9864eae04f1c02768af38f8570ff7af96c28faec0b not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.663371 4725 scope.go:117] "RemoveContainer" containerID="f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.698632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data" (OuterVolumeSpecName: "config-data") pod "fab4c603-5b7c-4538-8630-64e6eabc1b9a" (UID: "fab4c603-5b7c-4538-8630-64e6eabc1b9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.728051 4725 scope.go:117] "RemoveContainer" containerID="57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.760197 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab4c603-5b7c-4538-8630-64e6eabc1b9a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.760273 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.760287 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f37a2635-e342-497f-95af-54b5968a4daf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.770834 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.822310 4725 scope.go:117] "RemoveContainer" containerID="f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.823842 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e\": container with ID starting with f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e not found: ID does not exist" containerID="f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.823878 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e"} err="failed to get container status \"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e\": rpc error: code = NotFound desc = could not find container \"f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e\": container with ID starting with f86747f46c157b36f59d950904b514b8ad9a196253631e220707a33b7fffad1e not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.823924 4725 scope.go:117] "RemoveContainer" containerID="57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.824544 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda\": container with ID starting with 57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda not found: ID does not exist" containerID="57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.824619 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda"} err="failed to get container status \"57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda\": rpc error: code = NotFound desc = could not find container \"57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda\": container with ID starting with 57d31939b2556df3c7b8bbfd2d9100e5e343fce7391591a788abd796fa61cbda not found: ID does not exist" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.845082 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.886509 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.897892 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wzmzv"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.908408 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.919922 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.927774 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928244 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="init" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928264 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="init" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928282 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="proxy-httpd" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928289 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="proxy-httpd" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928323 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="sg-core" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928331 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="sg-core" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928342 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="dnsmasq-dns" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928350 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="dnsmasq-dns" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928371 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-notification-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928379 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-notification-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: E1002 11:47:58.928397 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-central-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.928404 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-central-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.929625 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="sg-core" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.929650 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37a2635-e342-497f-95af-54b5968a4daf" containerName="dnsmasq-dns" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.929679 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-notification-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.929694 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="proxy-httpd" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.929717 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" containerName="ceilometer-central-agent" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.932468 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.935051 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.935249 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.941343 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.969846 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.969911 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.969946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qghb\" (UniqueName: \"kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.969979 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.970058 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.970083 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:58 crc kubenswrapper[4725]: I1002 11:47:58.970303 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: E1002 11:47:59.000256 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf37a2635_e342_497f_95af_54b5968a4daf.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab4c603_5b7c_4538_8630_64e6eabc1b9a.slice/crio-79eff833c374aac2d1682e38e55f349076dddb0b98c5d1cc954cd5801c4a407b\": RecentStats: unable to find data in memory cache]" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.074458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075107 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075142 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075272 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075509 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.075542 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qghb\" (UniqueName: \"kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.077150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.078047 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.084567 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.092087 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.095467 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.126637 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.134361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qghb\" (UniqueName: \"kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb\") pod \"ceilometer-0\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.262078 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.282638 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37a2635-e342-497f-95af-54b5968a4daf" path="/var/lib/kubelet/pods/f37a2635-e342-497f-95af-54b5968a4daf/volumes" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.283537 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab4c603-5b7c-4538-8630-64e6eabc1b9a" path="/var/lib/kubelet/pods/fab4c603-5b7c-4538-8630-64e6eabc1b9a/volumes" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.472962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.473225 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.516869 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.521789 4725 generic.go:334] "Generic (PLEG): container finished" podID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerID="c78d4b1dd802049229fde71c1661b969a85293bd80daccba7c6293af2573c9f8" exitCode=0 Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.521893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" event={"ID":"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23","Type":"ContainerDied","Data":"c78d4b1dd802049229fde71c1661b969a85293bd80daccba7c6293af2573c9f8"} Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.521928 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" 
event={"ID":"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23","Type":"ContainerStarted","Data":"76562ec644152f9e0c302c44c2fb7b2a1a2d327a834d399d1a49af3d4502f797"} Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.533403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerStarted","Data":"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5"} Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.533450 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerStarted","Data":"0822123d63fb63d9789d9af9347227d325c9e0829eee353aa72bfe1196c82833"} Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.562658 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.611590 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.779583 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.779643 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.813449 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:47:59 crc kubenswrapper[4725]: W1002 11:47:59.829843 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db65970_a223_4081_b6fe_09abfb0fa1ec.slice/crio-c478c9d1f2a1a60d48a3178a6713b863e219cecd2578c5b95d91824386c664e2 WatchSource:0}: Error finding container c478c9d1f2a1a60d48a3178a6713b863e219cecd2578c5b95d91824386c664e2: Status 404 returned error can't find the container with id c478c9d1f2a1a60d48a3178a6713b863e219cecd2578c5b95d91824386c664e2 Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.844299 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 11:47:59 crc kubenswrapper[4725]: I1002 11:47:59.845188 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.574979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerStarted","Data":"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb"} Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.575474 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerStarted","Data":"c478c9d1f2a1a60d48a3178a6713b863e219cecd2578c5b95d91824386c664e2"} Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.587466 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" event={"ID":"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23","Type":"ContainerStarted","Data":"850c3d617ce9d71cb2c66bdf3ea7cafd2e5f98c501746492bb978c39c3c5b103"} Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.589057 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.593977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerStarted","Data":"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799"} Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.594809 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.602381 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ced9aab-c4e7-4463-9d29-d32521d07220","Type":"ContainerStarted","Data":"ae37f3b84569f30102eb2fabf11c3f2a620ab6a09a85ce1b2f4475945505b87d"} Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.602445 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.602466 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.603828 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.603861 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.624627 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" podStartSLOduration=3.624604173 podStartE2EDuration="3.624604173s" podCreationTimestamp="2025-10-02 11:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:00.607344698 +0000 UTC m=+1200.514844171" watchObservedRunningTime="2025-10-02 11:48:00.624604173 +0000 UTC m=+1200.532103656" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.624781 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-855d67b977-b45rh"] Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.626383 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.631303 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.631404 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.659107 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-855d67b977-b45rh"] Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.670129 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.670113295 podStartE2EDuration="4.670113295s" podCreationTimestamp="2025-10-02 11:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:00.639836666 +0000 UTC m=+1200.547336129" watchObservedRunningTime="2025-10-02 11:48:00.670113295 +0000 UTC m=+1200.577612758" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.677701 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d86865b78-gxbbf" podStartSLOduration=3.677687605 podStartE2EDuration="3.677687605s" podCreationTimestamp="2025-10-02 11:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:00.666472709 +0000 UTC m=+1200.573972212" watchObservedRunningTime="2025-10-02 11:48:00.677687605 +0000 UTC m=+1200.585187068" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-public-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-combined-ca-bundle\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734637 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-ovndb-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734663 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-internal-tls-certs\") pod 
\"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734704 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-httpd-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.734742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26x5\" (UniqueName: \"kubernetes.io/projected/714afd76-15e2-4584-a68c-50f3d524f3da-kube-api-access-n26x5\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836603 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-combined-ca-bundle\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-ovndb-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-internal-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-httpd-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n26x5\" (UniqueName: \"kubernetes.io/projected/714afd76-15e2-4584-a68c-50f3d524f3da-kube-api-access-n26x5\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.836842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-public-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " 
pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.843847 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-internal-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.844456 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-public-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.845011 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-ovndb-tls-certs\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.845558 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-httpd-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.846022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-config\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.848089 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714afd76-15e2-4584-a68c-50f3d524f3da-combined-ca-bundle\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.857382 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26x5\" (UniqueName: \"kubernetes.io/projected/714afd76-15e2-4584-a68c-50f3d524f3da-kube-api-access-n26x5\") pod \"neutron-855d67b977-b45rh\" (UID: \"714afd76-15e2-4584-a68c-50f3d524f3da\") " pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:00 crc kubenswrapper[4725]: I1002 11:48:00.945599 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:01 crc kubenswrapper[4725]: I1002 11:48:01.422857 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-855d67b977-b45rh"] Oct 02 11:48:01 crc kubenswrapper[4725]: I1002 11:48:01.609987 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerStarted","Data":"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b"} Oct 02 11:48:01 crc kubenswrapper[4725]: I1002 11:48:01.616128 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855d67b977-b45rh" event={"ID":"714afd76-15e2-4584-a68c-50f3d524f3da","Type":"ContainerStarted","Data":"e571384c5d24de382e217b22ae09349f5e725e138829844f4310598c26290df3"} Oct 02 11:48:01 crc kubenswrapper[4725]: I1002 11:48:01.616199 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:48:01 crc kubenswrapper[4725]: I1002 11:48:01.830220 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.078884 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.138784 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.624158 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855d67b977-b45rh" event={"ID":"714afd76-15e2-4584-a68c-50f3d524f3da","Type":"ContainerStarted","Data":"ec9048411216412a3c46e94213b7ead82300f4626991b35180118c9904b253eb"} Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.624202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855d67b977-b45rh" event={"ID":"714afd76-15e2-4584-a68c-50f3d524f3da","Type":"ContainerStarted","Data":"f08d2206e5471f201f91dfd0890d3675595ccb13da2f01eb997dd347c2b82f52"} Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.624305 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.627686 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerStarted","Data":"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46"} Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.627814 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.627833 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.627848 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.628571 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="cinder-scheduler" containerID="cri-o://a75ccedad18828ea9eb33a380074c708297e79c11abf35dc6a6d7199ca9c3346" gracePeriod=30 Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.628610 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="probe" containerID="cri-o://6f3bbb170521447430a26ecf8a0961a4fbbc5af7b255d197606e8cb65ea15fe9" gracePeriod=30 Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.646176 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-855d67b977-b45rh" podStartSLOduration=2.646159971 podStartE2EDuration="2.646159971s" podCreationTimestamp="2025-10-02 11:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:02.641020755 +0000 UTC m=+1202.548520238" watchObservedRunningTime="2025-10-02 11:48:02.646159971 +0000 UTC m=+1202.553659434" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.843372 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.844182 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 11:48:02 crc kubenswrapper[4725]: I1002 11:48:02.844241 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.637209 4725 generic.go:334] "Generic (PLEG): container finished" podID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerID="6f3bbb170521447430a26ecf8a0961a4fbbc5af7b255d197606e8cb65ea15fe9" exitCode=0 Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.637247 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerDied","Data":"6f3bbb170521447430a26ecf8a0961a4fbbc5af7b255d197606e8cb65ea15fe9"} Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.715174 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3b1c-account-create-hrzwx"] Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.716363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.718517 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.729490 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b1c-account-create-hrzwx"] Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.808831 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8bc6-account-create-5mhvj"] Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.809962 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.813074 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.826118 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8bc6-account-create-5mhvj"] Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.911458 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnc8z\" (UniqueName: \"kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z\") pod \"nova-cell0-3b1c-account-create-hrzwx\" (UID: \"4313ed03-47ea-43a4-b854-634bfd153111\") " pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:03 crc kubenswrapper[4725]: I1002 11:48:03.911619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47bv\" (UniqueName: \"kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv\") pod \"nova-cell1-8bc6-account-create-5mhvj\" (UID: \"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9\") " pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.013995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnc8z\" (UniqueName: \"kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z\") pod \"nova-cell0-3b1c-account-create-hrzwx\" (UID: \"4313ed03-47ea-43a4-b854-634bfd153111\") " pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.014077 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l47bv\" (UniqueName: \"kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv\") pod \"nova-cell1-8bc6-account-create-5mhvj\" (UID: \"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9\") " pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.032318 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47bv\" (UniqueName: \"kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv\") pod \"nova-cell1-8bc6-account-create-5mhvj\" (UID: \"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9\") " pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.035756 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnc8z\" (UniqueName: \"kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z\") pod \"nova-cell0-3b1c-account-create-hrzwx\" (UID: \"4313ed03-47ea-43a4-b854-634bfd153111\") " pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.137101 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.335045 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.657887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerStarted","Data":"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474"} Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.658594 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:48:04 crc kubenswrapper[4725]: W1002 11:48:04.658792 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf115e8bf_e0c9_4c1d_90ac_a55224c8eef9.slice/crio-f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af WatchSource:0}: Error finding container f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af: Status 404 returned error can't find the container with id f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.663129 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8bc6-account-create-5mhvj"] Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.684422 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.633150539 podStartE2EDuration="6.68440345s" podCreationTimestamp="2025-10-02 11:47:58 +0000 UTC" firstStartedPulling="2025-10-02 11:47:59.832382973 +0000 UTC m=+1199.739882436" lastFinishedPulling="2025-10-02 11:48:03.883635884 +0000 UTC m=+1203.791135347" observedRunningTime="2025-10-02 11:48:04.676684766 +0000 UTC m=+1204.584184229" watchObservedRunningTime="2025-10-02 11:48:04.68440345 +0000 UTC m=+1204.591902913" Oct 02 11:48:04 crc kubenswrapper[4725]: I1002 11:48:04.862716 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b1c-account-create-hrzwx"] Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.674189 4725 generic.go:334] "Generic (PLEG): container finished" podID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerID="a75ccedad18828ea9eb33a380074c708297e79c11abf35dc6a6d7199ca9c3346" exitCode=0 Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.674534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerDied","Data":"a75ccedad18828ea9eb33a380074c708297e79c11abf35dc6a6d7199ca9c3346"} Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.676499 4725 generic.go:334] "Generic (PLEG): container finished" podID="4313ed03-47ea-43a4-b854-634bfd153111" containerID="bade04a920a243a4390c715c14c24939ac948e46a6368256b916dfce4a9a105f" exitCode=0 Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.676553 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" event={"ID":"4313ed03-47ea-43a4-b854-634bfd153111","Type":"ContainerDied","Data":"bade04a920a243a4390c715c14c24939ac948e46a6368256b916dfce4a9a105f"} Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.676572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" event={"ID":"4313ed03-47ea-43a4-b854-634bfd153111","Type":"ContainerStarted","Data":"60fd98952f1ea3009970b1928dc193e2273edaf1f06a7a4971723f98017c639d"} Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.680259 
4725 generic.go:334] "Generic (PLEG): container finished" podID="f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" containerID="ef2fed30b6e70305d56d698d900485db247542728b67bcddf31fac28275739d7" exitCode=0 Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.680414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" event={"ID":"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9","Type":"ContainerDied","Data":"ef2fed30b6e70305d56d698d900485db247542728b67bcddf31fac28275739d7"} Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.680442 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" event={"ID":"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9","Type":"ContainerStarted","Data":"f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af"} Oct 02 11:48:05 crc kubenswrapper[4725]: I1002 11:48:05.909360 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051259 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051506 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051528 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051608 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.051754 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkqhl\" (UniqueName: \"kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl\") pod \"ec237bcc-311d-4f76-9a83-ae08624ed97a\" (UID: \"ec237bcc-311d-4f76-9a83-ae08624ed97a\") " Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.052097 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec237bcc-311d-4f76-9a83-ae08624ed97a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.059849 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts" (OuterVolumeSpecName: "scripts") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.059870 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl" (OuterVolumeSpecName: "kube-api-access-wkqhl") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "kube-api-access-wkqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.064838 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.129946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.154153 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkqhl\" (UniqueName: \"kubernetes.io/projected/ec237bcc-311d-4f76-9a83-ae08624ed97a-kube-api-access-wkqhl\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.154189 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.154200 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.154210 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.198312 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data" (OuterVolumeSpecName: "config-data") pod "ec237bcc-311d-4f76-9a83-ae08624ed97a" (UID: "ec237bcc-311d-4f76-9a83-ae08624ed97a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.255435 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec237bcc-311d-4f76-9a83-ae08624ed97a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.691334 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec237bcc-311d-4f76-9a83-ae08624ed97a","Type":"ContainerDied","Data":"834c0a9119a8c7c8c253761b9bb670772f303320803f31e1e451905b8083c8bb"} Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.691390 4725 scope.go:117] "RemoveContainer" containerID="6f3bbb170521447430a26ecf8a0961a4fbbc5af7b255d197606e8cb65ea15fe9" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.691401 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.731192 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.731827 4725 scope.go:117] "RemoveContainer" containerID="a75ccedad18828ea9eb33a380074c708297e79c11abf35dc6a6d7199ca9c3346" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.739836 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.752476 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:06 crc kubenswrapper[4725]: E1002 11:48:06.753419 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="probe" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.753442 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="probe" Oct 02 11:48:06 crc kubenswrapper[4725]: E1002 11:48:06.753487 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="cinder-scheduler" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.753496 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="cinder-scheduler" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.753690 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="cinder-scheduler" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.753706 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" containerName="probe" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.755438 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.758199 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.781259 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.872097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.872153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64jg\" (UniqueName: \"kubernetes.io/projected/58f46069-09a8-4501-95a3-70b3d03ee211-kube-api-access-d64jg\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.872191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.872237 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.873401 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f46069-09a8-4501-95a3-70b3d03ee211-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.873440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-scripts\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.886497 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.886873 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-central-agent" containerID="cri-o://6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb" gracePeriod=30 Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.887115 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="proxy-httpd" containerID="cri-o://81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474" 
gracePeriod=30 Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.887240 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="sg-core" containerID="cri-o://3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46" gracePeriod=30 Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.887285 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-notification-agent" containerID="cri-o://2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b" gracePeriod=30 Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f46069-09a8-4501-95a3-70b3d03ee211-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974831 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-scripts\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d64jg\" (UniqueName: \"kubernetes.io/projected/58f46069-09a8-4501-95a3-70b3d03ee211-kube-api-access-d64jg\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974963 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f46069-09a8-4501-95a3-70b3d03ee211-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.974972 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.980381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.985257 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.985970 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-config-data\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.993415 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f46069-09a8-4501-95a3-70b3d03ee211-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:06 crc kubenswrapper[4725]: I1002 11:48:06.995199 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64jg\" (UniqueName: \"kubernetes.io/projected/58f46069-09a8-4501-95a3-70b3d03ee211-kube-api-access-d64jg\") pod \"cinder-scheduler-0\" (UID: \"58f46069-09a8-4501-95a3-70b3d03ee211\") " pod="openstack/cinder-scheduler-0" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.079857 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.251846 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.264052 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.293704 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec237bcc-311d-4f76-9a83-ae08624ed97a" path="/var/lib/kubelet/pods/ec237bcc-311d-4f76-9a83-ae08624ed97a/volumes" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.381362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnc8z\" (UniqueName: \"kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z\") pod \"4313ed03-47ea-43a4-b854-634bfd153111\" (UID: \"4313ed03-47ea-43a4-b854-634bfd153111\") " Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.381539 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47bv\" (UniqueName: \"kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv\") pod \"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9\" (UID: \"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9\") " Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.390955 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv" (OuterVolumeSpecName: "kube-api-access-l47bv") pod "f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" (UID: "f115e8bf-e0c9-4c1d-90ac-a55224c8eef9"). InnerVolumeSpecName "kube-api-access-l47bv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.391092 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z" (OuterVolumeSpecName: "kube-api-access-rnc8z") pod "4313ed03-47ea-43a4-b854-634bfd153111" (UID: "4313ed03-47ea-43a4-b854-634bfd153111"). InnerVolumeSpecName "kube-api-access-rnc8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.483776 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnc8z\" (UniqueName: \"kubernetes.io/projected/4313ed03-47ea-43a4-b854-634bfd153111-kube-api-access-rnc8z\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.483822 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l47bv\" (UniqueName: \"kubernetes.io/projected/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9-kube-api-access-l47bv\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.684741 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.711520 4725 generic.go:334] "Generic (PLEG): container finished" podID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerID="81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474" exitCode=0 Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.712874 4725 generic.go:334] "Generic (PLEG): container finished" podID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerID="3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46" exitCode=2 Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.712962 4725 generic.go:334] "Generic (PLEG): container finished" podID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerID="2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b" exitCode=0 Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.712798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerDied","Data":"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.713228 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerDied","Data":"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.713390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerDied","Data":"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.715774 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" event={"ID":"4313ed03-47ea-43a4-b854-634bfd153111","Type":"ContainerDied","Data":"60fd98952f1ea3009970b1928dc193e2273edaf1f06a7a4971723f98017c639d"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.715880 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fd98952f1ea3009970b1928dc193e2273edaf1f06a7a4971723f98017c639d" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.716044 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3b1c-account-create-hrzwx" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.722602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58f46069-09a8-4501-95a3-70b3d03ee211","Type":"ContainerStarted","Data":"fa5ecec946c6125d678d4193353da6c9b30a1ab21c3a77882af7ceb031cd83c2"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.726622 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" event={"ID":"f115e8bf-e0c9-4c1d-90ac-a55224c8eef9","Type":"ContainerDied","Data":"f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af"} Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.726671 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55c03a014660cf7d6b9742e39a6004eacee58686a6ada951dc192cdde3cf9af" Oct 02 11:48:07 crc kubenswrapper[4725]: I1002 11:48:07.726764 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8bc6-account-create-5mhvj" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.157961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.219105 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.219330 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="dnsmasq-dns" containerID="cri-o://ea1156d54dfb67bd2541feb6f359520e862fd901db5c7699e81bf4451054bf6d" gracePeriod=10 Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.752998 4725 generic.go:334] "Generic (PLEG): container finished" podID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerID="ea1156d54dfb67bd2541feb6f359520e862fd901db5c7699e81bf4451054bf6d" exitCode=0 Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.753390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerDied","Data":"ea1156d54dfb67bd2541feb6f359520e862fd901db5c7699e81bf4451054bf6d"} Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.758076 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58f46069-09a8-4501-95a3-70b3d03ee211","Type":"ContainerStarted","Data":"a415999c3590e0f9cdc5232857e002aac34dc7b4168c323ca43021a19d90d001"} Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.967889 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969186 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rr568"] Oct 02 11:48:08 crc kubenswrapper[4725]: E1002 11:48:08.969618 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969635 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: E1002 11:48:08.969658 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="dnsmasq-dns" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969666 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="dnsmasq-dns" Oct 02 11:48:08 crc kubenswrapper[4725]: E1002 11:48:08.969683 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="init" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969690 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="init" Oct 02 11:48:08 crc kubenswrapper[4725]: E1002 11:48:08.969712 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4313ed03-47ea-43a4-b854-634bfd153111" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969739 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4313ed03-47ea-43a4-b854-634bfd153111" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969933 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969955 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" containerName="dnsmasq-dns" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.969965 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4313ed03-47ea-43a4-b854-634bfd153111" containerName="mariadb-account-create" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.970651 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.977025 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.977085 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 02 11:48:08 crc kubenswrapper[4725]: I1002 11:48:08.981168 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-blzwj" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.012022 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rr568"] Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.027658 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.029708 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.029850 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.029947 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn6p\" (UniqueName: \"kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.030085 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.030234 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config\") pod \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\" (UID: \"d40540af-42c0-4c62-8bb4-3e8ba5f23f82\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.030633 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.031258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.032174 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhhbb\" (UniqueName: \"kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.032275 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.058944 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p" (OuterVolumeSpecName: "kube-api-access-vmn6p") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "kube-api-access-vmn6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.123191 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.128765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhhbb\" (UniqueName: \"kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135750 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135761 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.135770 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn6p\" (UniqueName: \"kubernetes.io/projected/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-kube-api-access-vmn6p\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.156093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.160707 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.166166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhhbb\" (UniqueName: \"kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb\") pod 
\"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.181455 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts\") pod \"nova-cell0-conductor-db-sync-rr568\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.193342 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.237284 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.249843 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.284250 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.294286 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config" (OuterVolumeSpecName: "config") pod "d40540af-42c0-4c62-8bb4-3e8ba5f23f82" (UID: "d40540af-42c0-4c62-8bb4-3e8ba5f23f82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.307541 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338606 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338802 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338833 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338871 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.338896 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qghb\" (UniqueName: \"kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb\") pod \"5db65970-a223-4081-b6fe-09abfb0fa1ec\" (UID: \"5db65970-a223-4081-b6fe-09abfb0fa1ec\") " Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.339377 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.339395 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40540af-42c0-4c62-8bb4-3e8ba5f23f82-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.340546 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.342216 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.353861 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts" (OuterVolumeSpecName: "scripts") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.366877 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb" (OuterVolumeSpecName: "kube-api-access-2qghb") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "kube-api-access-2qghb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.442992 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.443035 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.443047 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db65970-a223-4081-b6fe-09abfb0fa1ec-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.443058 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qghb\" (UniqueName: \"kubernetes.io/projected/5db65970-a223-4081-b6fe-09abfb0fa1ec-kube-api-access-2qghb\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.448922 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.512893 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data" (OuterVolumeSpecName: "config-data") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.549745 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.550106 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.595887 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db65970-a223-4081-b6fe-09abfb0fa1ec" (UID: "5db65970-a223-4081-b6fe-09abfb0fa1ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.653929 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db65970-a223-4081-b6fe-09abfb0fa1ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.707145 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.772533 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58f46069-09a8-4501-95a3-70b3d03ee211","Type":"ContainerStarted","Data":"42b19eb95ae142c7bdebd6824bf1c0cea5e1719957b38eb9569019fec3dda492"} Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.787930 4725 generic.go:334] "Generic (PLEG): container finished" podID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerID="6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb" exitCode=0 Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.787990 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerDied","Data":"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb"} Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.788016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db65970-a223-4081-b6fe-09abfb0fa1ec","Type":"ContainerDied","Data":"c478c9d1f2a1a60d48a3178a6713b863e219cecd2578c5b95d91824386c664e2"} Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.788032 4725 scope.go:117] "RemoveContainer" containerID="81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.788144 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.828567 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" event={"ID":"d40540af-42c0-4c62-8bb4-3e8ba5f23f82","Type":"ContainerDied","Data":"7119e9b4a4e4eb1a86559e39bd50738597ba59b42d9814a8890e10e0cffab92c"} Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.828886 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-tswl9" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.835882 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8358550300000003 podStartE2EDuration="3.83585503s" podCreationTimestamp="2025-10-02 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:09.792023272 +0000 UTC m=+1209.699522735" watchObservedRunningTime="2025-10-02 11:48:09.83585503 +0000 UTC m=+1209.743354503" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.898983 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.919555 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.923946 4725 scope.go:117] "RemoveContainer" containerID="3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.931146 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rr568"] Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.941671 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:09 crc kubenswrapper[4725]: E1002 11:48:09.942116 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="sg-core" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="sg-core" Oct 02 11:48:09 crc kubenswrapper[4725]: E1002 11:48:09.942143 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="proxy-httpd" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942149 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="proxy-httpd" Oct 02 11:48:09 crc kubenswrapper[4725]: E1002 11:48:09.942158 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-central-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942163 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-central-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: E1002 11:48:09.942173 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-notification-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942179 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-notification-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942342 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="sg-core" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942357 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-central-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942375 4725 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="ceilometer-notification-agent" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.942386 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" containerName="proxy-httpd" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.944226 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.947209 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.947642 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 11:48:09 crc kubenswrapper[4725]: I1002 11:48:09.947844 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.008887 4725 scope.go:117] "RemoveContainer" containerID="2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.026901 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.031988 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-tswl9"] Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.058755 4725 scope.go:117] "RemoveContainer" containerID="6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.068024 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.068104 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwrh\" (UniqueName: \"kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.068130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.068657 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.068820 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.069057 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.069160 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.091326 4725 scope.go:117] "RemoveContainer" containerID="81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474" Oct 02 11:48:10 crc kubenswrapper[4725]: E1002 11:48:10.091843 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474\": container with ID starting with 81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474 not found: ID does not exist" containerID="81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.091879 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474"} err="failed to get container status \"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474\": rpc error: code = NotFound desc = could not find container \"81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474\": container with ID starting with 81e400d1d1c1257fc3d0554c802a73e67b0a3d08f32391c57828bbfab8f18474 not found: ID does not exist" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.091905 4725 scope.go:117] "RemoveContainer" containerID="3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46" Oct 02 11:48:10 crc kubenswrapper[4725]: E1002 11:48:10.092435 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46\": container with ID starting with 3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46 not found: ID does not exist" containerID="3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.092467 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46"} err="failed to get container status \"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46\": rpc error: code = NotFound desc = could not find container \"3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46\": container with ID starting with 3aa3299c3dba114726cdf872ec5ba0a867997eda5022b84c77bd73497fd0fc46 not found: ID does not exist" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.092486 4725 scope.go:117] "RemoveContainer" containerID="2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b" Oct 02 11:48:10 crc kubenswrapper[4725]: E1002 11:48:10.092769 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b\": container with ID starting with 2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b not found: ID does not exist" containerID="2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.092796 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b"} err="failed to get container status \"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b\": rpc error: code = NotFound desc = could not find container \"2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b\": container with ID starting with 2803dff3fd03d00cfcb837ddb201e0f7ad8f837347e824e17911e8a0d8baaf1b not found: ID does not exist" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.092813 4725 scope.go:117] "RemoveContainer" containerID="6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb" Oct 02 11:48:10 crc kubenswrapper[4725]: E1002 11:48:10.097206 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb\": container with ID starting with 6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb not found: ID does not exist" containerID="6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.097247 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb"} err="failed to get container status \"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb\": rpc error: code = NotFound desc = could not find container \"6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb\": container with ID starting with 6920cbfce260a996c63cd411a4870585a24a478eb044d3c0eb86cb7170a0dbbb not found: ID does not exist" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.097275 4725 scope.go:117] "RemoveContainer" containerID="ea1156d54dfb67bd2541feb6f359520e862fd901db5c7699e81bf4451054bf6d" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.128772 4725 scope.go:117] "RemoveContainer" containerID="6f55acd9fa6e338d1ed56afb2b77f89d9e6736a615b7d2e499b6f4476a9c0efd" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173834 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc 
kubenswrapper[4725]: I1002 11:48:10.173884 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwrh\" (UniqueName: \"kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173904 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.173975 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.174381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.174859 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.190190 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.190807 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.191209 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.191254 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.198133 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jmwrh\" (UniqueName: \"kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh\") pod \"ceilometer-0\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.312453 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.839631 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rr568" event={"ID":"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0","Type":"ContainerStarted","Data":"9c7f39b8f06f083051d8ddb6e45cc6dd0083af0766e3896514a78a49cdae3fae"} Oct 02 11:48:10 crc kubenswrapper[4725]: I1002 11:48:10.866624 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:11 crc kubenswrapper[4725]: I1002 11:48:11.278835 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db65970-a223-4081-b6fe-09abfb0fa1ec" path="/var/lib/kubelet/pods/5db65970-a223-4081-b6fe-09abfb0fa1ec/volumes" Oct 02 11:48:11 crc kubenswrapper[4725]: I1002 11:48:11.279870 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40540af-42c0-4c62-8bb4-3e8ba5f23f82" path="/var/lib/kubelet/pods/d40540af-42c0-4c62-8bb4-3e8ba5f23f82/volumes" Oct 02 11:48:11 crc kubenswrapper[4725]: I1002 11:48:11.853716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerStarted","Data":"266c9a6bd05ba799a33ad49546d5df50d63e9b2da2a3443946a8d8d3bc393653"} Oct 02 11:48:12 crc kubenswrapper[4725]: I1002 11:48:12.080068 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 02 11:48:12 crc kubenswrapper[4725]: I1002 11:48:12.863563 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerStarted","Data":"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5"} Oct 02 11:48:12 crc kubenswrapper[4725]: I1002 11:48:12.865139 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:48:12 crc kubenswrapper[4725]: I1002 11:48:12.869309 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b656dd8b-n4tcm" Oct 02 11:48:13 crc kubenswrapper[4725]: I1002 11:48:13.881048 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerStarted","Data":"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f"} Oct 02 11:48:14 crc kubenswrapper[4725]: I1002 11:48:14.890295 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerStarted","Data":"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8"} Oct 02 11:48:14 crc kubenswrapper[4725]: I1002 11:48:14.978574 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:48:14 crc kubenswrapper[4725]: I1002 11:48:14.978643 4725 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:48:16 crc kubenswrapper[4725]: I1002 11:48:16.957076 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:17 crc kubenswrapper[4725]: I1002 11:48:17.329407 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.976943 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerStarted","Data":"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f"} Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.977567 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.977134 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="proxy-httpd" containerID="cri-o://2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f" gracePeriod=30 Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.977100 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-central-agent" containerID="cri-o://db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5" gracePeriod=30 Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.977150 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="sg-core" containerID="cri-o://986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8" gracePeriod=30 Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.977163 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-notification-agent" containerID="cri-o://25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f" gracePeriod=30 Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.979655 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rr568" event={"ID":"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0","Type":"ContainerStarted","Data":"2b582f6638491081d26584705e354265af28a6267eede14c895dab78ccba9184"} Oct 02 11:48:22 crc kubenswrapper[4725]: I1002 11:48:22.999159 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.363313416 podStartE2EDuration="13.999140156s" podCreationTimestamp="2025-10-02 11:48:09 +0000 UTC" firstStartedPulling="2025-10-02 11:48:10.892938231 +0000 UTC m=+1210.800437694" lastFinishedPulling="2025-10-02 11:48:22.528764971 +0000 UTC m=+1222.436264434" observedRunningTime="2025-10-02 11:48:22.997961825 +0000 UTC m=+1222.905461298" watchObservedRunningTime="2025-10-02 11:48:22.999140156 +0000 UTC m=+1222.906639629" Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.034628 4725 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-db-sync-rr568" podStartSLOduration=2.480740722 podStartE2EDuration="15.034607713s" podCreationTimestamp="2025-10-02 11:48:08 +0000 UTC" firstStartedPulling="2025-10-02 11:48:09.978494445 +0000 UTC m=+1209.885993908" lastFinishedPulling="2025-10-02 11:48:22.532361436 +0000 UTC m=+1222.439860899" observedRunningTime="2025-10-02 11:48:23.025747989 +0000 UTC m=+1222.933247482" watchObservedRunningTime="2025-10-02 11:48:23.034607713 +0000 UTC m=+1222.942107176" Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.991794 4725 generic.go:334] "Generic (PLEG): container finished" podID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerID="986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8" exitCode=2 Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.992137 4725 generic.go:334] "Generic (PLEG): container finished" podID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerID="25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f" exitCode=0 Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.992150 4725 generic.go:334] "Generic (PLEG): container finished" podID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerID="db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5" exitCode=0 Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.991835 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerDied","Data":"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8"} Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.992218 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerDied","Data":"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f"} Oct 02 11:48:23 crc kubenswrapper[4725]: I1002 11:48:23.992255 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerDied","Data":"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5"} Oct 02 11:48:28 crc kubenswrapper[4725]: I1002 11:48:28.140525 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:48:30 crc kubenswrapper[4725]: I1002 11:48:30.958781 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-855d67b977-b45rh" Oct 02 11:48:31 crc kubenswrapper[4725]: I1002 11:48:31.030921 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:48:31 crc kubenswrapper[4725]: I1002 11:48:31.031266 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d86865b78-gxbbf" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-api" containerID="cri-o://44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5" gracePeriod=30 Oct 02 11:48:31 crc kubenswrapper[4725]: I1002 11:48:31.031890 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d86865b78-gxbbf" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-httpd" containerID="cri-o://629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799" gracePeriod=30 Oct 02 11:48:32 crc kubenswrapper[4725]: I1002 11:48:32.074682 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="d529b1cf-6961-45e2-984d-640a7321da54" containerID="629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799" exitCode=0 Oct 02 11:48:32 crc kubenswrapper[4725]: I1002 11:48:32.074757 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerDied","Data":"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799"} Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.710748 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.872843 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config\") pod \"d529b1cf-6961-45e2-984d-640a7321da54\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.872899 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle\") pod \"d529b1cf-6961-45e2-984d-640a7321da54\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.873002 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv556\" (UniqueName: \"kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556\") pod \"d529b1cf-6961-45e2-984d-640a7321da54\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.873075 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config\") pod \"d529b1cf-6961-45e2-984d-640a7321da54\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.873110 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs\") pod \"d529b1cf-6961-45e2-984d-640a7321da54\" (UID: \"d529b1cf-6961-45e2-984d-640a7321da54\") " Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.886910 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556" (OuterVolumeSpecName: "kube-api-access-xv556") pod "d529b1cf-6961-45e2-984d-640a7321da54" (UID: "d529b1cf-6961-45e2-984d-640a7321da54"). InnerVolumeSpecName "kube-api-access-xv556". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.886972 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d529b1cf-6961-45e2-984d-640a7321da54" (UID: "d529b1cf-6961-45e2-984d-640a7321da54"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.934008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config" (OuterVolumeSpecName: "config") pod "d529b1cf-6961-45e2-984d-640a7321da54" (UID: "d529b1cf-6961-45e2-984d-640a7321da54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.940075 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d529b1cf-6961-45e2-984d-640a7321da54" (UID: "d529b1cf-6961-45e2-984d-640a7321da54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.964001 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d529b1cf-6961-45e2-984d-640a7321da54" (UID: "d529b1cf-6961-45e2-984d-640a7321da54"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.975280 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv556\" (UniqueName: \"kubernetes.io/projected/d529b1cf-6961-45e2-984d-640a7321da54-kube-api-access-xv556\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.975317 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.975331 4725 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.975343 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:34 crc kubenswrapper[4725]: I1002 11:48:34.975355 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d529b1cf-6961-45e2-984d-640a7321da54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.106344 4725 generic.go:334] "Generic (PLEG): container finished" podID="cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" containerID="2b582f6638491081d26584705e354265af28a6267eede14c895dab78ccba9184" exitCode=0 Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.106414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rr568" event={"ID":"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0","Type":"ContainerDied","Data":"2b582f6638491081d26584705e354265af28a6267eede14c895dab78ccba9184"} Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.109152 4725 generic.go:334] "Generic (PLEG): container finished" podID="d529b1cf-6961-45e2-984d-640a7321da54" containerID="44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5" exitCode=0 Oct 02 11:48:35 crc kubenswrapper[4725]: 
I1002 11:48:35.109191 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d86865b78-gxbbf" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.109190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerDied","Data":"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5"} Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.109312 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d86865b78-gxbbf" event={"ID":"d529b1cf-6961-45e2-984d-640a7321da54","Type":"ContainerDied","Data":"0822123d63fb63d9789d9af9347227d325c9e0829eee353aa72bfe1196c82833"} Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.109334 4725 scope.go:117] "RemoveContainer" containerID="629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.143460 4725 scope.go:117] "RemoveContainer" containerID="44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.144042 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.152401 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d86865b78-gxbbf"] Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.169860 4725 scope.go:117] "RemoveContainer" containerID="629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799" Oct 02 11:48:35 crc kubenswrapper[4725]: E1002 11:48:35.170289 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799\": container with ID starting with 629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799 not found: ID does not exist" containerID="629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.170327 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799"} err="failed to get container status \"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799\": rpc error: code = NotFound desc = could not find container \"629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799\": container with ID starting with 629cdf2007d164368dae0ebe45796f1582b5b3365439fc5d0b776f01a999e799 not found: ID does not exist" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.170351 4725 scope.go:117] "RemoveContainer" containerID="44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5" Oct 02 11:48:35 crc kubenswrapper[4725]: E1002 11:48:35.170651 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5\": container with ID starting with 44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5 not found: ID does not exist" containerID="44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.170701 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5"} 
err="failed to get container status \"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5\": rpc error: code = NotFound desc = could not find container \"44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5\": container with ID starting with 44a6fbbdb532ad6e629e8ce9b77f6a4eb1519ddfba943b73c0e782b7ffc8bbd5 not found: ID does not exist" Oct 02 11:48:35 crc kubenswrapper[4725]: I1002 11:48:35.278233 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d529b1cf-6961-45e2-984d-640a7321da54" path="/var/lib/kubelet/pods/d529b1cf-6961-45e2-984d-640a7321da54/volumes" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.471078 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.501049 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data\") pod \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.501139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhhbb\" (UniqueName: \"kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb\") pod \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.501165 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts\") pod \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.501211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle\") pod \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\" (UID: \"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0\") " Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.515419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts" (OuterVolumeSpecName: "scripts") pod "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" (UID: "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.515523 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb" (OuterVolumeSpecName: "kube-api-access-hhhbb") pod "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" (UID: "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0"). InnerVolumeSpecName "kube-api-access-hhhbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.528522 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" (UID: "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.529019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data" (OuterVolumeSpecName: "config-data") pod "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" (UID: "cd75c12f-2fcf-485d-9bbe-e5d1564af0e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.602977 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhhbb\" (UniqueName: \"kubernetes.io/projected/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-kube-api-access-hhhbb\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.603027 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.603041 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:36 crc kubenswrapper[4725]: I1002 11:48:36.603054 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.131216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rr568" event={"ID":"cd75c12f-2fcf-485d-9bbe-e5d1564af0e0","Type":"ContainerDied","Data":"9c7f39b8f06f083051d8ddb6e45cc6dd0083af0766e3896514a78a49cdae3fae"} Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.131256 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c7f39b8f06f083051d8ddb6e45cc6dd0083af0766e3896514a78a49cdae3fae" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.131314 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rr568" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.233566 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 11:48:37 crc kubenswrapper[4725]: E1002 11:48:37.234053 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-httpd" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234079 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-httpd" Oct 02 11:48:37 crc kubenswrapper[4725]: E1002 11:48:37.234099 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-api" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234107 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-api" Oct 02 11:48:37 crc kubenswrapper[4725]: E1002 11:48:37.234125 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" containerName="nova-cell0-conductor-db-sync" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234137 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" containerName="nova-cell0-conductor-db-sync" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234387 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-api" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234419 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" containerName="nova-cell0-conductor-db-sync" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.234430 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d529b1cf-6961-45e2-984d-640a7321da54" containerName="neutron-httpd" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.235137 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.239196 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-blzwj" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.247182 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.294903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.315153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.315359 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.315384 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlwzf\" (UniqueName: \"kubernetes.io/projected/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-kube-api-access-qlwzf\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.419662 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.423957 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlwzf\" (UniqueName: \"kubernetes.io/projected/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-kube-api-access-qlwzf\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.425375 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.432181 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.433709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.445144 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlwzf\" (UniqueName: \"kubernetes.io/projected/bdeadf2b-92a4-44ae-803a-493d9ef4a7c2-kube-api-access-qlwzf\") pod \"nova-cell0-conductor-0\" (UID: \"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2\") " pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.553837 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:37 crc kubenswrapper[4725]: I1002 11:48:37.785306 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 11:48:38 crc kubenswrapper[4725]: I1002 11:48:38.141650 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2","Type":"ContainerStarted","Data":"296155f552d96933479baca64be4c25644b780be6045189ff0315e53e523d794"} Oct 02 11:48:38 crc kubenswrapper[4725]: I1002 11:48:38.142035 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bdeadf2b-92a4-44ae-803a-493d9ef4a7c2","Type":"ContainerStarted","Data":"d39504f9b89f4ade6d8336c76840cf408c1433ae6663fef6ed1219dc498b4497"} Oct 02 11:48:38 crc kubenswrapper[4725]: I1002 11:48:38.142093 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:38 crc kubenswrapper[4725]: I1002 11:48:38.160392 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.160368871 podStartE2EDuration="1.160368871s" podCreationTimestamp="2025-10-02 11:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:38.156522569 +0000 UTC m=+1238.064022062" watchObservedRunningTime="2025-10-02 11:48:38.160368871 +0000 UTC m=+1238.067868344" Oct 02 11:48:40 crc kubenswrapper[4725]: I1002 11:48:40.317270 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 11:48:42 crc kubenswrapper[4725]: I1002 11:48:42.580000 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.015746 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b9p69"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.017182 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.021559 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.022300 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.028633 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9p69"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.038291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.038345 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjc76\" (UniqueName: \"kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.038395 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.038435 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.140760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.140923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.140978 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjc76\" (UniqueName: \"kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.141045 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.148267 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.150174 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.152558 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.187773 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.189677 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.192344 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.198407 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjc76\" (UniqueName: \"kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76\") pod \"nova-cell0-cell-mapping-b9p69\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.198490 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.214785 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.215998 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.218468 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.224541 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243395 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243476 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qz8k\" (UniqueName: \"kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243518 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243555 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrhb\" (UniqueName: \"kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243614 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.243633 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.348129 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373323 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373486 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373526 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qz8k\" (UniqueName: \"kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373562 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbrhb\" (UniqueName: \"kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.373747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.385877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.388057 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.390018 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs\") pod \"nova-api-0\" (UID: 
\"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.393402 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.399857 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.413363 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qz8k\" (UniqueName: \"kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k\") pod \"nova-api-0\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.456261 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.457636 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.458691 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbrhb\" (UniqueName: \"kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb\") pod \"nova-scheduler-0\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.493859 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.517582 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.577806 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.577881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.577906 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6lc\" (UniqueName: \"kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.577955 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.588964 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.605051 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.605858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.608605 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.613680 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.630205 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.631324 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.647117 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.679888 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.679945 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.679974 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.679995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6lc\" (UniqueName: \"kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680059 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 
11:48:43.680103 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680138 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680174 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxbc\" (UniqueName: \"kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680201 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680263 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgr9\" (UniqueName: \"kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680312 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.680692 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.689348 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.692129 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.694307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.720980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6lc\" (UniqueName: \"kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc\") pod \"nova-metadata-0\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784609 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784666 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgr9\" (UniqueName: \"kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784808 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784832 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784877 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784916 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784932 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784953 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxbc\" (UniqueName: \"kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.784982 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.785868 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.786351 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.787428 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.787976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.796437 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.808481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.808714 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" 
(UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.813391 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wgr9\" (UniqueName: \"kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9\") pod \"dnsmasq-dns-845d6d6f59-r9lvl\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.838762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxbc\" (UniqueName: \"kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.872565 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.948355 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:43 crc kubenswrapper[4725]: I1002 11:48:43.979130 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.237692 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9p69"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.372100 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.429615 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:48:44 crc kubenswrapper[4725]: W1002 11:48:44.447766 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67e65026_79a4_4f8e_9fc4_34602344b007.slice/crio-26f55b2d248009fe9601be280d9913527efd4a471d400c63985e9d6150c49801 WatchSource:0}: Error finding container 26f55b2d248009fe9601be280d9913527efd4a471d400c63985e9d6150c49801: Status 404 returned error can't find the container with id 26f55b2d248009fe9601be280d9913527efd4a471d400c63985e9d6150c49801 Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.526195 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.550142 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.645421 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z786"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.648508 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.651541 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.651762 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.664538 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z786"] Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.672614 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:48:44 crc kubenswrapper[4725]: W1002 11:48:44.696670 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod685c0bd4_be48_464c_8d69_73a8637fbad8.slice/crio-7a0485736a990de092a4ed25aef95144238f45ade973617f73b95ca53da51f91 WatchSource:0}: Error finding container 7a0485736a990de092a4ed25aef95144238f45ade973617f73b95ca53da51f91: Status 404 returned error can't find the container with id 7a0485736a990de092a4ed25aef95144238f45ade973617f73b95ca53da51f91 Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.709858 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq2s\" (UniqueName: \"kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.709960 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.709984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.710013 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.812061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.812123 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.812154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.812255 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrq2s\" (UniqueName: \"kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.816095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.816126 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.823378 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.832212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrq2s\" (UniqueName: \"kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s\") pod \"nova-cell1-conductor-db-sync-5z786\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.978581 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.979100 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:48:44 crc kubenswrapper[4725]: I1002 11:48:44.979768 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.220368 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9p69" event={"ID":"92ee54c2-e42c-4233-ad3c-062d28b43fb5","Type":"ContainerStarted","Data":"4f604fa3fa5900f27cd1fe9880a11824e0372a39669b4637da6abf8f76643092"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.220654 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9p69" event={"ID":"92ee54c2-e42c-4233-ad3c-062d28b43fb5","Type":"ContainerStarted","Data":"ede88a4bf93cbd8a59fb49bda465a7e2f596e04121d9904fd89a8221ddd271f7"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.233602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"685c0bd4-be48-464c-8d69-73a8637fbad8","Type":"ContainerStarted","Data":"7a0485736a990de092a4ed25aef95144238f45ade973617f73b95ca53da51f91"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.252122 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerID="2b4073dfa09153fa5f84647e792316f784d5c780044c7797e6dd267ccd1710db" exitCode=0 Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.252237 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" event={"ID":"d1e81b79-e75e-4647-985f-5ac5395faf7f","Type":"ContainerDied","Data":"2b4073dfa09153fa5f84647e792316f784d5c780044c7797e6dd267ccd1710db"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.252271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" event={"ID":"d1e81b79-e75e-4647-985f-5ac5395faf7f","Type":"ContainerStarted","Data":"b6c258d633883bc6953337f588b064465df0a5f5cb52dbd01b1a95dd66e7f60a"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.253779 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67e65026-79a4-4f8e-9fc4-34602344b007","Type":"ContainerStarted","Data":"26f55b2d248009fe9601be280d9913527efd4a471d400c63985e9d6150c49801"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.255465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerStarted","Data":"6a89d8910db4d9e0a11881b7c10a4fdb9331fc9f2f120f708ad401becf29cb9d"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.256239 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b9p69" podStartSLOduration=3.256219844 podStartE2EDuration="3.256219844s" podCreationTimestamp="2025-10-02 11:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:45.246184159 +0000 UTC m=+1245.153683622" watchObservedRunningTime="2025-10-02 11:48:45.256219844 +0000 UTC m=+1245.163719307" Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.256499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerStarted","Data":"f859409dff8d541d8ad97244cde13eaeaf14355dee8ad650f327c024a9efc413"} Oct 02 11:48:45 crc kubenswrapper[4725]: I1002 11:48:45.448813 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z786"] Oct 02 
11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.267971 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" event={"ID":"d1e81b79-e75e-4647-985f-5ac5395faf7f","Type":"ContainerStarted","Data":"185ba311d68d38712c9ac15f58a6739a9179be4fdb31f0a351672ba145a13f23"} Oct 02 11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.268408 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.272641 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z786" event={"ID":"3a682c38-9f47-4cf9-9111-c709313b0a72","Type":"ContainerStarted","Data":"029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f"} Oct 02 11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.272678 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z786" event={"ID":"3a682c38-9f47-4cf9-9111-c709313b0a72","Type":"ContainerStarted","Data":"08d933d71320237dbcacc7ced41bbed6c4ab6f5f43730356a75af5d9e93de42d"} Oct 02 11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.294850 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" podStartSLOduration=3.294830946 podStartE2EDuration="3.294830946s" podCreationTimestamp="2025-10-02 11:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:46.291413507 +0000 UTC m=+1246.198912970" watchObservedRunningTime="2025-10-02 11:48:46.294830946 +0000 UTC m=+1246.202330399" Oct 02 11:48:46 crc kubenswrapper[4725]: I1002 11:48:46.314329 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5z786" podStartSLOduration=2.314313601 podStartE2EDuration="2.314313601s" podCreationTimestamp="2025-10-02 11:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:48:46.30667768 +0000 UTC m=+1246.214177163" watchObservedRunningTime="2025-10-02 11:48:46.314313601 +0000 UTC m=+1246.221813064" Oct 02 11:48:47 crc kubenswrapper[4725]: I1002 11:48:47.223087 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:48:47 crc kubenswrapper[4725]: I1002 11:48:47.241976 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.300202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67e65026-79a4-4f8e-9fc4-34602344b007","Type":"ContainerStarted","Data":"bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.302672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerStarted","Data":"26d6b02052b1ef7123377edc40b16ccb3d80aa4c3b9bfa542b362f2ed8518d91"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.302701 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerStarted","Data":"ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.304987 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerStarted","Data":"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.305045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerStarted","Data":"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.306602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"685c0bd4-be48-464c-8d69-73a8637fbad8","Type":"ContainerStarted","Data":"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40"} Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.306747 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="685c0bd4-be48-464c-8d69-73a8637fbad8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40" gracePeriod=30 Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.315408 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.739769491 podStartE2EDuration="6.315394405s" podCreationTimestamp="2025-10-02 11:48:43 +0000 UTC" firstStartedPulling="2025-10-02 11:48:44.451740189 +0000 UTC m=+1244.359239652" lastFinishedPulling="2025-10-02 11:48:48.027365103 +0000 UTC m=+1247.934864566" observedRunningTime="2025-10-02 11:48:49.312206511 +0000 UTC m=+1249.219705974" watchObservedRunningTime="2025-10-02 11:48:49.315394405 +0000 UTC m=+1249.222893868" Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.332129 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.001909086 podStartE2EDuration="6.332114867s" podCreationTimestamp="2025-10-02 11:48:43 +0000 UTC" firstStartedPulling="2025-10-02 11:48:44.698911019 +0000 UTC m=+1244.606410482" lastFinishedPulling="2025-10-02 11:48:48.0291168 +0000 UTC m=+1247.936616263" observedRunningTime="2025-10-02 11:48:49.330652058 +0000 UTC m=+1249.238151521" watchObservedRunningTime="2025-10-02 11:48:49.332114867 +0000 UTC m=+1249.239614320" Oct 02 11:48:49 crc kubenswrapper[4725]: I1002 11:48:49.349636 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.709991245 podStartE2EDuration="6.349615349s" podCreationTimestamp="2025-10-02 11:48:43 +0000 UTC" firstStartedPulling="2025-10-02 11:48:44.391462317 +0000 UTC m=+1244.298961780" lastFinishedPulling="2025-10-02 11:48:48.031086421 +0000 UTC m=+1247.938585884" observedRunningTime="2025-10-02 11:48:49.343409036 +0000 UTC m=+1249.250908499" watchObservedRunningTime="2025-10-02 11:48:49.349615349 +0000 UTC m=+1249.257114822" Oct 02 11:48:50 crc kubenswrapper[4725]: I1002 11:48:50.322105 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-log" containerID="cri-o://ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab" gracePeriod=30 Oct 02 11:48:50 crc kubenswrapper[4725]: I1002 11:48:50.322566 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-metadata" containerID="cri-o://26d6b02052b1ef7123377edc40b16ccb3d80aa4c3b9bfa542b362f2ed8518d91" gracePeriod=30 Oct 02 11:48:50 crc kubenswrapper[4725]: I1002 11:48:50.345868 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.849800261 podStartE2EDuration="7.345851402s" podCreationTimestamp="2025-10-02 11:48:43 +0000 UTC" firstStartedPulling="2025-10-02 11:48:44.533366766 +0000 UTC m=+1244.440866229" lastFinishedPulling="2025-10-02 11:48:48.029417907 +0000 UTC m=+1247.936917370" observedRunningTime="2025-10-02 11:48:50.342164984 +0000 UTC m=+1250.249664477" watchObservedRunningTime="2025-10-02 11:48:50.345851402 +0000 UTC m=+1250.253350865" Oct 02 11:48:50 crc kubenswrapper[4725]: E1002 11:48:50.764693 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff2193e4_d09b_4d0b_86af_3ad15aa7a7f5.slice/crio-conmon-ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.334102 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerID="26d6b02052b1ef7123377edc40b16ccb3d80aa4c3b9bfa542b362f2ed8518d91" exitCode=0 Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.334343 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerID="ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab" exitCode=143 Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.334363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerDied","Data":"26d6b02052b1ef7123377edc40b16ccb3d80aa4c3b9bfa542b362f2ed8518d91"} Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.334389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerDied","Data":"ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab"} Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.577915 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.645891 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b6lc\" (UniqueName: \"kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc\") pod \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.646028 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data\") pod \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.646116 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs\") pod \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.646148 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle\") pod \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\" (UID: \"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5\") " Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.647787 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs" (OuterVolumeSpecName: "logs") pod "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" (UID: "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.652341 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc" (OuterVolumeSpecName: "kube-api-access-4b6lc") pod "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" (UID: "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5"). InnerVolumeSpecName "kube-api-access-4b6lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.677332 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" (UID: "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.680088 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data" (OuterVolumeSpecName: "config-data") pod "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" (UID: "ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.750051 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.750105 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b6lc\" (UniqueName: \"kubernetes.io/projected/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-kube-api-access-4b6lc\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.750130 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:51 crc kubenswrapper[4725]: I1002 11:48:51.750151 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.348647 4725 generic.go:334] "Generic (PLEG): container finished" podID="92ee54c2-e42c-4233-ad3c-062d28b43fb5" containerID="4f604fa3fa5900f27cd1fe9880a11824e0372a39669b4637da6abf8f76643092" exitCode=0 Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.348779 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9p69" event={"ID":"92ee54c2-e42c-4233-ad3c-062d28b43fb5","Type":"ContainerDied","Data":"4f604fa3fa5900f27cd1fe9880a11824e0372a39669b4637da6abf8f76643092"} Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.352251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5","Type":"ContainerDied","Data":"6a89d8910db4d9e0a11881b7c10a4fdb9331fc9f2f120f708ad401becf29cb9d"} Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.352304 4725 scope.go:117] "RemoveContainer" containerID="26d6b02052b1ef7123377edc40b16ccb3d80aa4c3b9bfa542b362f2ed8518d91" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.352342 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.393831 4725 scope.go:117] "RemoveContainer" containerID="ff123575769971e80fba18e2316f6f041f711471521f547db4084671adf11fab" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.408388 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.420088 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.431299 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:52 crc kubenswrapper[4725]: E1002 11:48:52.431688 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-log" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.431710 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-log" Oct 02 11:48:52 crc kubenswrapper[4725]: E1002 11:48:52.431771 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-metadata" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.431779 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-metadata" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.431959 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-metadata" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.431972 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" containerName="nova-metadata-log" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.432933 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.435056 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.435327 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.454210 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.563333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.563439 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.563474 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.563513 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm574\" (UniqueName: \"kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.563533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.665211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.665310 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm574\" (UniqueName: \"kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.665343 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.665444 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.665524 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.666119 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.671378 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.671780 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.673198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.689456 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm574\" (UniqueName: \"kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574\") pod \"nova-metadata-0\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " pod="openstack/nova-metadata-0" Oct 02 11:48:52 crc kubenswrapper[4725]: I1002 11:48:52.755505 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.192522 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.284752 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5" path="/var/lib/kubelet/pods/ff2193e4-d09b-4d0b-86af-3ad15aa7a7f5/volumes" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.331866 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.367271 4725 generic.go:334] "Generic (PLEG): container finished" podID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerID="2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f" exitCode=137 Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.367338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerDied","Data":"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f"} Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.367363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f1cbd1b-3aa9-48a2-b02e-2353184d604e","Type":"ContainerDied","Data":"266c9a6bd05ba799a33ad49546d5df50d63e9b2da2a3443946a8d8d3bc393653"} Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.367381 4725 scope.go:117] "RemoveContainer" containerID="2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.367479 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.379086 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerStarted","Data":"ba0d5e1418acc3fa6019ed59dba717f3fb7bcb19b9d94c0414df970ce1205f87"} Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.389777 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.389823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.389876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.389925 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwrh\" (UniqueName: \"kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.390059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.390091 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.390182 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd\") pod \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\" (UID: \"0f1cbd1b-3aa9-48a2-b02e-2353184d604e\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.392257 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.396118 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts" (OuterVolumeSpecName: "scripts") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.397260 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.399590 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh" (OuterVolumeSpecName: "kube-api-access-jmwrh") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "kube-api-access-jmwrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.417494 4725 scope.go:117] "RemoveContainer" containerID="986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.429931 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.444036 4725 scope.go:117] "RemoveContainer" containerID="25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.461532 4725 scope.go:117] "RemoveContainer" containerID="db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.493072 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.493105 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.493114 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.493123 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.493134 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwrh\" (UniqueName: \"kubernetes.io/projected/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-kube-api-access-jmwrh\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.500826 4725 scope.go:117] "RemoveContainer" containerID="2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.501299 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f\": container with ID starting with 2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f not found: ID does not exist" containerID="2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.501357 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f"} err="failed to get container status \"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f\": rpc error: code = NotFound desc = could not find container \"2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f\": container with ID starting with 2d73ccbb1849a1154ee8f89e75516c5926dde141da3158653823ed51d1a43e8f not found: ID does not exist" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.501388 4725 scope.go:117] "RemoveContainer" containerID="986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.501807 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8\": container with ID starting with 986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8 not found: ID does not exist" 
containerID="986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.501870 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8"} err="failed to get container status \"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8\": rpc error: code = NotFound desc = could not find container \"986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8\": container with ID starting with 986ed97d905a249db9f98b20fca4a9210d2e3a5c1238068163d3346fe1f821b8 not found: ID does not exist" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.501903 4725 scope.go:117] "RemoveContainer" containerID="25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.503448 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data" (OuterVolumeSpecName: "config-data") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.503975 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f\": container with ID starting with 25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f not found: ID does not exist" containerID="25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.504003 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f"} err="failed to get container status \"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f\": rpc error: code = NotFound desc = could not find container \"25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f\": container with ID starting with 25169b4de2c638c2275cac18317cf32ef65859dcffb46f82758ab967cd62074f not found: ID does not exist" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.504020 4725 scope.go:117] "RemoveContainer" containerID="db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.504355 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5\": container with ID starting with db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5 not found: ID does not exist" containerID="db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.504376 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5"} err="failed to get container status \"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5\": rpc error: code = NotFound desc = could not find container \"db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5\": container with ID starting with db79bd9c64b32ca05e6947a22bbaf09602c586e3531260c65460c8f8873246e5 not found: 
ID does not exist" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.510185 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f1cbd1b-3aa9-48a2-b02e-2353184d604e" (UID: "0f1cbd1b-3aa9-48a2-b02e-2353184d604e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.594443 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.594475 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1cbd1b-3aa9-48a2-b02e-2353184d604e-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.606669 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.606713 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.610010 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.610038 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.638315 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.812438 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.853808 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.872793 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.896817 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.897469 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ee54c2-e42c-4233-ad3c-062d28b43fb5" containerName="nova-manage" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897496 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ee54c2-e42c-4233-ad3c-062d28b43fb5" containerName="nova-manage" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.897511 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="proxy-httpd" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897519 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="proxy-httpd" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.897542 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-notification-agent" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897551 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-notification-agent" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.897586 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-central-agent" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897595 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-central-agent" Oct 02 11:48:53 crc kubenswrapper[4725]: E1002 11:48:53.897611 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="sg-core" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897618 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="sg-core" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897868 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="sg-core" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897901 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-notification-agent" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897919 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="proxy-httpd" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897932 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ee54c2-e42c-4233-ad3c-062d28b43fb5" containerName="nova-manage" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.897940 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" containerName="ceilometer-central-agent" Oct 02 11:48:53 crc 
kubenswrapper[4725]: I1002 11:48:53.898419 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle\") pod \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.898495 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjc76\" (UniqueName: \"kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76\") pod \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.898667 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data\") pod \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.898749 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts\") pod \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\" (UID: \"92ee54c2-e42c-4233-ad3c-062d28b43fb5\") " Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.900185 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.910250 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.910943 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.926760 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.957637 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:48:53 crc kubenswrapper[4725]: I1002 11:48:53.980407 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000750 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000773 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000812 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000848 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgnl\" (UniqueName: \"kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000888 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.000939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.032037 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.032288 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" containerID="cri-o://850c3d617ce9d71cb2c66bdf3ea7cafd2e5f98c501746492bb978c39c3c5b103" gracePeriod=10 Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.102835 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.102921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.102987 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103074 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103162 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgnl\" (UniqueName: \"kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103592 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.103644 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.113390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.113503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.113545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.130792 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgnl\" (UniqueName: \"kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.131825 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data\") pod \"ceilometer-0\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") " pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.239948 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.393477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9p69" event={"ID":"92ee54c2-e42c-4233-ad3c-062d28b43fb5","Type":"ContainerDied","Data":"ede88a4bf93cbd8a59fb49bda465a7e2f596e04121d9904fd89a8221ddd271f7"} Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.393529 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede88a4bf93cbd8a59fb49bda465a7e2f596e04121d9904fd89a8221ddd271f7" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.393492 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9p69" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.395019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerStarted","Data":"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397"} Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.434851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.543820 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.544356 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-log" containerID="cri-o://c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a" gracePeriod=30 Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.544434 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-api" containerID="cri-o://00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3" gracePeriod=30 Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.551513 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": EOF" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.551522 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": EOF" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.558158 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.563431 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts" (OuterVolumeSpecName: "scripts") pod "92ee54c2-e42c-4233-ad3c-062d28b43fb5" (UID: "92ee54c2-e42c-4233-ad3c-062d28b43fb5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.614135 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.623482 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76" (OuterVolumeSpecName: "kube-api-access-mjc76") pod "92ee54c2-e42c-4233-ad3c-062d28b43fb5" (UID: "92ee54c2-e42c-4233-ad3c-062d28b43fb5"). InnerVolumeSpecName "kube-api-access-mjc76". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.716064 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjc76\" (UniqueName: \"kubernetes.io/projected/92ee54c2-e42c-4233-ad3c-062d28b43fb5-kube-api-access-mjc76\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.823946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data" (OuterVolumeSpecName: "config-data") pod "92ee54c2-e42c-4233-ad3c-062d28b43fb5" (UID: "92ee54c2-e42c-4233-ad3c-062d28b43fb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.832018 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.849060 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92ee54c2-e42c-4233-ad3c-062d28b43fb5" (UID: "92ee54c2-e42c-4233-ad3c-062d28b43fb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.921766 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:54 crc kubenswrapper[4725]: I1002 11:48:54.921796 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92ee54c2-e42c-4233-ad3c-062d28b43fb5-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:48:55 crc kubenswrapper[4725]: I1002 11:48:55.291512 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1cbd1b-3aa9-48a2-b02e-2353184d604e" path="/var/lib/kubelet/pods/0f1cbd1b-3aa9-48a2-b02e-2353184d604e/volumes" Oct 02 11:48:55 crc kubenswrapper[4725]: I1002 11:48:55.293082 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:48:55 crc kubenswrapper[4725]: W1002 11:48:55.294159 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a466353_e216_4368_b9fb_386e37daf3b7.slice/crio-77cf43219b405d6af3a35049e1b43d8c63ba20200c017de278757c7959783269 WatchSource:0}: Error finding container 77cf43219b405d6af3a35049e1b43d8c63ba20200c017de278757c7959783269: Status 404 returned error can't find the container with id 77cf43219b405d6af3a35049e1b43d8c63ba20200c017de278757c7959783269 Oct 02 11:48:55 crc kubenswrapper[4725]: I1002 11:48:55.408534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerStarted","Data":"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430"} Oct 02 11:48:55 crc kubenswrapper[4725]: I1002 11:48:55.409902 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerStarted","Data":"77cf43219b405d6af3a35049e1b43d8c63ba20200c017de278757c7959783269"} Oct 02 11:48:56 crc kubenswrapper[4725]: I1002 11:48:56.427562 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" containerID="cri-o://bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" gracePeriod=30 Oct 02 11:48:58 crc kubenswrapper[4725]: I1002 11:48:58.157564 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Oct 02 11:48:58 crc kubenswrapper[4725]: E1002 11:48:58.612466 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:48:58 crc kubenswrapper[4725]: E1002 11:48:58.614239 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:48:58 crc kubenswrapper[4725]: E1002 11:48:58.615611 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:48:58 crc kubenswrapper[4725]: E1002 11:48:58.615643 4725 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" Oct 02 11:49:03 crc kubenswrapper[4725]: I1002 11:49:03.157478 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Oct 02 11:49:03 crc kubenswrapper[4725]: I1002 11:49:03.200995 4725 generic.go:334] "Generic (PLEG): container finished" podID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerID="c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a" exitCode=-1 Oct 02 11:49:03 crc kubenswrapper[4725]: I1002 11:49:03.201056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerDied","Data":"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a"} Oct 02 11:49:03 crc kubenswrapper[4725]: E1002 11:49:03.615207 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:03 crc kubenswrapper[4725]: E1002 11:49:03.617183 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:03 crc kubenswrapper[4725]: E1002 11:49:03.618817 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:03 crc kubenswrapper[4725]: E1002 11:49:03.618874 4725 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" Oct 02 11:49:03 crc kubenswrapper[4725]: I1002 11:49:03.715281 4725 generic.go:334] "Generic (PLEG): container finished" podID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerID="850c3d617ce9d71cb2c66bdf3ea7cafd2e5f98c501746492bb978c39c3c5b103" exitCode=0 Oct 02 11:49:03 crc kubenswrapper[4725]: 
I1002 11:49:03.715360 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" event={"ID":"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23","Type":"ContainerDied","Data":"850c3d617ce9d71cb2c66bdf3ea7cafd2e5f98c501746492bb978c39c3c5b103"} Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.327065 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396298 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztgq8\" (UniqueName: \"kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396492 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396536 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396618 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.396645 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config\") pod \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\" (UID: \"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23\") " Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.424035 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8" (OuterVolumeSpecName: "kube-api-access-ztgq8") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "kube-api-access-ztgq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.465492 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.469209 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.477874 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config" (OuterVolumeSpecName: "config") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.484273 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.494629 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" (UID: "2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500200 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztgq8\" (UniqueName: \"kubernetes.io/projected/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-kube-api-access-ztgq8\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500246 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500266 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500283 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500300 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.500316 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.727056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5784cf869f-bvzll" event={"ID":"2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23","Type":"ContainerDied","Data":"76562ec644152f9e0c302c44c2fb7b2a1a2d327a834d399d1a49af3d4502f797"} Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.727465 4725 scope.go:117] "RemoveContainer" containerID="850c3d617ce9d71cb2c66bdf3ea7cafd2e5f98c501746492bb978c39c3c5b103" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.727088 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-log" containerID="cri-o://91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" gracePeriod=30 Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.727194 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-metadata" containerID="cri-o://4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" gracePeriod=30 Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.727322 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bvzll" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.749377 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=12.749355955 podStartE2EDuration="12.749355955s" podCreationTimestamp="2025-10-02 11:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:04.747821575 +0000 UTC m=+1264.655321058" watchObservedRunningTime="2025-10-02 11:49:04.749355955 +0000 UTC m=+1264.656855418" Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.794741 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.802905 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bvzll"] Oct 02 11:49:04 crc kubenswrapper[4725]: I1002 11:49:04.838172 4725 scope.go:117] "RemoveContainer" containerID="c78d4b1dd802049229fde71c1661b969a85293bd80daccba7c6293af2573c9f8" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.286119 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" path="/var/lib/kubelet/pods/2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23/volumes" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.737814 4725 generic.go:334] "Generic (PLEG): container finished" podID="67e65026-79a4-4f8e-9fc4-34602344b007" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" exitCode=0 Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.738022 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67e65026-79a4-4f8e-9fc4-34602344b007","Type":"ContainerDied","Data":"bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609"} Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.739985 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740175 4725 generic.go:334] "Generic (PLEG): container finished" podID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerID="4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" exitCode=0 Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740192 4725 generic.go:334] "Generic (PLEG): container finished" podID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerID="91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" exitCode=143 Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerDied","Data":"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430"} Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerDied","Data":"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397"} Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740258 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3df56b84-798f-430c-8d9a-2bb16dd1c54a","Type":"ContainerDied","Data":"ba0d5e1418acc3fa6019ed59dba717f3fb7bcb19b9d94c0414df970ce1205f87"} Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.740274 4725 scope.go:117] "RemoveContainer" containerID="4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.773299 4725 scope.go:117] "RemoveContainer" containerID="91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.828305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data\") pod \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.828417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs\") pod \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.828470 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle\") pod \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.828498 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs\") pod \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\" (UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.828541 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm574\" (UniqueName: \"kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574\") pod \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\" 
(UID: \"3df56b84-798f-430c-8d9a-2bb16dd1c54a\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.829864 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs" (OuterVolumeSpecName: "logs") pod "3df56b84-798f-430c-8d9a-2bb16dd1c54a" (UID: "3df56b84-798f-430c-8d9a-2bb16dd1c54a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.838192 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574" (OuterVolumeSpecName: "kube-api-access-fm574") pod "3df56b84-798f-430c-8d9a-2bb16dd1c54a" (UID: "3df56b84-798f-430c-8d9a-2bb16dd1c54a"). InnerVolumeSpecName "kube-api-access-fm574". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.869064 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.872535 4725 scope.go:117] "RemoveContainer" containerID="4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" Oct 02 11:49:05 crc kubenswrapper[4725]: E1002 11:49:05.873030 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430\": container with ID starting with 4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430 not found: ID does not exist" containerID="4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.873066 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430"} err="failed to get container status \"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430\": rpc error: code = NotFound desc = could not find container \"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430\": container with ID starting with 4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430 not found: ID does not exist" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.873083 4725 scope.go:117] "RemoveContainer" containerID="91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" Oct 02 11:49:05 crc kubenswrapper[4725]: E1002 11:49:05.873386 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397\": container with ID starting with 91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397 not found: ID does not exist" containerID="91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.873414 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397"} err="failed to get container status \"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397\": rpc error: code = NotFound desc = could not find container \"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397\": container with ID starting with 91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397 not 
found: ID does not exist" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.873434 4725 scope.go:117] "RemoveContainer" containerID="4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.874258 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430"} err="failed to get container status \"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430\": rpc error: code = NotFound desc = could not find container \"4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430\": container with ID starting with 4f704ba88fcfe4eca6cfa20dcddf77591f5b49cde4f8b5532082b02477535430 not found: ID does not exist" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.874278 4725 scope.go:117] "RemoveContainer" containerID="91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.874533 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397"} err="failed to get container status \"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397\": rpc error: code = NotFound desc = could not find container \"91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397\": container with ID starting with 91e1dd403037b1be01038888f8e6015059a1f8cefc0157e76d6cd44ecd546397 not found: ID does not exist" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.876411 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df56b84-798f-430c-8d9a-2bb16dd1c54a" (UID: "3df56b84-798f-430c-8d9a-2bb16dd1c54a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.880822 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data" (OuterVolumeSpecName: "config-data") pod "3df56b84-798f-430c-8d9a-2bb16dd1c54a" (UID: "3df56b84-798f-430c-8d9a-2bb16dd1c54a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931005 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data\") pod \"67e65026-79a4-4f8e-9fc4-34602344b007\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931094 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle\") pod \"67e65026-79a4-4f8e-9fc4-34602344b007\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbrhb\" (UniqueName: \"kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb\") pod \"67e65026-79a4-4f8e-9fc4-34602344b007\" (UID: \"67e65026-79a4-4f8e-9fc4-34602344b007\") " Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931640 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931662 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931676 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df56b84-798f-430c-8d9a-2bb16dd1c54a-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.931686 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm574\" (UniqueName: \"kubernetes.io/projected/3df56b84-798f-430c-8d9a-2bb16dd1c54a-kube-api-access-fm574\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.939529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3df56b84-798f-430c-8d9a-2bb16dd1c54a" (UID: "3df56b84-798f-430c-8d9a-2bb16dd1c54a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.941302 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb" (OuterVolumeSpecName: "kube-api-access-hbrhb") pod "67e65026-79a4-4f8e-9fc4-34602344b007" (UID: "67e65026-79a4-4f8e-9fc4-34602344b007"). InnerVolumeSpecName "kube-api-access-hbrhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.965482 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data" (OuterVolumeSpecName: "config-data") pod "67e65026-79a4-4f8e-9fc4-34602344b007" (UID: "67e65026-79a4-4f8e-9fc4-34602344b007"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:05 crc kubenswrapper[4725]: I1002 11:49:05.967526 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67e65026-79a4-4f8e-9fc4-34602344b007" (UID: "67e65026-79a4-4f8e-9fc4-34602344b007"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.033734 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.033772 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbrhb\" (UniqueName: \"kubernetes.io/projected/67e65026-79a4-4f8e-9fc4-34602344b007-kube-api-access-hbrhb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.033785 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df56b84-798f-430c-8d9a-2bb16dd1c54a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.033793 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e65026-79a4-4f8e-9fc4-34602344b007-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.771077 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerStarted","Data":"49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31"} Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.775845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67e65026-79a4-4f8e-9fc4-34602344b007","Type":"ContainerDied","Data":"26f55b2d248009fe9601be280d9913527efd4a471d400c63985e9d6150c49801"} Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.775886 4725 scope.go:117] "RemoveContainer" containerID="bd4a192cf7a7a112cbd0e1b9f34e54ed2d48f4394a61345216875c358af62609" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.775962 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.791483 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.840927 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.862965 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.879513 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: E1002 11:49:06.880091 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="init" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880122 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="init" Oct 02 11:49:06 crc kubenswrapper[4725]: E1002 11:49:06.880139 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-log" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880151 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-log" Oct 02 11:49:06 crc kubenswrapper[4725]: E1002 11:49:06.880185 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880198 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" Oct 02 11:49:06 crc kubenswrapper[4725]: E1002 11:49:06.880224 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880235 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" Oct 02 11:49:06 crc kubenswrapper[4725]: E1002 11:49:06.880263 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-metadata" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880276 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-metadata" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880622 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-log" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880659 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" containerName="nova-metadata-metadata" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880697 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5191f7-a514-4c42-bfc7-3f8ea3a3cb23" containerName="dnsmasq-dns" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.880717 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" containerName="nova-scheduler-scheduler" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.881650 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.889180 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.901495 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.916917 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.932895 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.949262 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.951407 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.953428 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.954625 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.964175 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.965491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.965545 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv78j\" (UniqueName: \"kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:06 crc kubenswrapper[4725]: I1002 11:49:06.965593 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.067717 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.067813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ljf9\" (UniqueName: \"kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.067855 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.068027 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.068071 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv78j\" (UniqueName: \"kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.068120 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.068141 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.068163 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.072654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.072797 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.095137 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv78j\" (UniqueName: \"kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j\") pod \"nova-scheduler-0\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.171865 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data\") pod \"nova-metadata-0\" (UID: 
\"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.171914 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.171957 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.172323 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ljf9\" (UniqueName: \"kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.172366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.172439 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.176399 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.176475 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.178221 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.189396 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ljf9\" (UniqueName: \"kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9\") pod \"nova-metadata-0\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.283699 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df56b84-798f-430c-8d9a-2bb16dd1c54a" 
path="/var/lib/kubelet/pods/3df56b84-798f-430c-8d9a-2bb16dd1c54a/volumes" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.284671 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e65026-79a4-4f8e-9fc4-34602344b007" path="/var/lib/kubelet/pods/67e65026-79a4-4f8e-9fc4-34602344b007/volumes" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.348259 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.366276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.818196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerStarted","Data":"82901847cb3aa2ed78edc0182eb3f35814fe46b9751b308e8ca19c7705385651"} Oct 02 11:49:07 crc kubenswrapper[4725]: I1002 11:49:07.982360 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.074123 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.492316 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.604029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs\") pod \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.604413 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qz8k\" (UniqueName: \"kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k\") pod \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.604454 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle\") pod \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.604622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data\") pod \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\" (UID: \"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1\") " Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.605431 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs" (OuterVolumeSpecName: "logs") pod "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" (UID: "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.613823 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k" (OuterVolumeSpecName: "kube-api-access-8qz8k") pod "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" (UID: "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1"). InnerVolumeSpecName "kube-api-access-8qz8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.637321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" (UID: "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.643026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data" (OuterVolumeSpecName: "config-data") pod "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" (UID: "8eadbb0c-ae53-40e9-963f-9f12c89d3ee1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.706794 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.706856 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.706870 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qz8k\" (UniqueName: \"kubernetes.io/projected/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-kube-api-access-8qz8k\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.706883 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.828844 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerStarted","Data":"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.828897 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerStarted","Data":"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.828912 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerStarted","Data":"81fc9280d5540eed2a0111224784a16ef23e8d223b19905bb4d26444f27606b9"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.831177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerStarted","Data":"66c5857de613d8b019dbddce1829bc72dc907311072c6e4da310f61f81609fb2"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.833142 4725 generic.go:334] "Generic (PLEG): container finished" podID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerID="00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3" exitCode=0 Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.833211 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerDied","Data":"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.833222 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.833238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eadbb0c-ae53-40e9-963f-9f12c89d3ee1","Type":"ContainerDied","Data":"f859409dff8d541d8ad97244cde13eaeaf14355dee8ad650f327c024a9efc413"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.833259 4725 scope.go:117] "RemoveContainer" containerID="00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.836089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"130b74c4-570d-4e65-a6cb-3b295d5caeae","Type":"ContainerStarted","Data":"2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.836130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"130b74c4-570d-4e65-a6cb-3b295d5caeae","Type":"ContainerStarted","Data":"bdf37a3d72906a1cf237d05a8fe11f520f5cb162ee95c6156991b80a93f4d0ca"} Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.867674 4725 scope.go:117] "RemoveContainer" containerID="c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.876919 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.876893522 podStartE2EDuration="2.876893522s" podCreationTimestamp="2025-10-02 11:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:08.866717192 +0000 UTC m=+1268.774216655" watchObservedRunningTime="2025-10-02 11:49:08.876893522 +0000 UTC m=+1268.784392985" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.890146 4725 scope.go:117] "RemoveContainer" containerID="00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3" Oct 02 11:49:08 crc kubenswrapper[4725]: E1002 11:49:08.890623 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3\": container with ID starting with 00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3 not found: ID does not exist" containerID="00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.890665 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3"} err="failed to get container status \"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3\": rpc error: code = NotFound desc = could not find container \"00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3\": container with ID starting with 00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3 not found: ID does not exist" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.890690 4725 scope.go:117] "RemoveContainer" containerID="c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a" Oct 02 11:49:08 crc kubenswrapper[4725]: E1002 11:49:08.891058 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a\": container with ID starting with c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a not found: ID does not exist" containerID="c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.891096 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a"} err="failed to get container status \"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a\": rpc error: code = NotFound desc = could not find container \"c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a\": container with ID starting with c7ca13a7211bf9102c75695c4c0d7946ad8aee7a3efbbff420aad74e9f43c72a not found: ID does not exist" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.892099 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.892081162 podStartE2EDuration="2.892081162s" podCreationTimestamp="2025-10-02 11:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:08.889575077 +0000 UTC m=+1268.797074550" watchObservedRunningTime="2025-10-02 11:49:08.892081162 +0000 UTC m=+1268.799580625" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.915946 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.923197 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.944785 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:08 crc kubenswrapper[4725]: E1002 11:49:08.945153 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-log" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.945169 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-log" Oct 02 11:49:08 crc kubenswrapper[4725]: E1002 11:49:08.945186 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-api" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.945192 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-api" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.945406 4725 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-api" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.945437 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" containerName="nova-api-log" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.946885 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.949668 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 11:49:08 crc kubenswrapper[4725]: I1002 11:49:08.961661 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.015139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.015214 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzlx4\" (UniqueName: \"kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.015244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.015383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.116936 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.117310 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.117346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzlx4\" (UniqueName: \"kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.117364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.117761 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.120881 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.123431 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.135474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzlx4\" (UniqueName: \"kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4\") pod \"nova-api-0\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.261739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.291495 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eadbb0c-ae53-40e9-963f-9f12c89d3ee1" path="/var/lib/kubelet/pods/8eadbb0c-ae53-40e9-963f-9f12c89d3ee1/volumes" Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.774347 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:09 crc kubenswrapper[4725]: I1002 11:49:09.853245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerStarted","Data":"087d14884040bfea93bfdfc60480ba6ab66246a269cca2c5cd68564543c20eaa"} Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.863892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerStarted","Data":"baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180"} Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.864395 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.866374 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerStarted","Data":"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15"} Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.866403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerStarted","Data":"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7"} Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.887515 4725 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.191576816 podStartE2EDuration="17.887493945s" podCreationTimestamp="2025-10-02 11:48:53 +0000 UTC" firstStartedPulling="2025-10-02 11:48:55.297928854 +0000 UTC m=+1255.205428317" lastFinishedPulling="2025-10-02 11:49:09.993845983 +0000 UTC m=+1269.901345446" observedRunningTime="2025-10-02 11:49:10.883309135 +0000 UTC m=+1270.790808588" watchObservedRunningTime="2025-10-02 11:49:10.887493945 +0000 UTC m=+1270.794993408" Oct 02 11:49:10 crc kubenswrapper[4725]: I1002 11:49:10.911531 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.91151137 podStartE2EDuration="2.91151137s" podCreationTimestamp="2025-10-02 11:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:10.904516605 +0000 UTC m=+1270.812016068" watchObservedRunningTime="2025-10-02 11:49:10.91151137 +0000 UTC m=+1270.819010833" Oct 02 11:49:12 crc kubenswrapper[4725]: I1002 11:49:12.348859 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 11:49:12 crc kubenswrapper[4725]: I1002 11:49:12.366827 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 11:49:12 crc kubenswrapper[4725]: I1002 11:49:12.367800 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 11:49:12 crc kubenswrapper[4725]: I1002 11:49:12.887607 4725 generic.go:334] "Generic (PLEG): container finished" podID="3a682c38-9f47-4cf9-9111-c709313b0a72" containerID="029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f" exitCode=0 Oct 02 11:49:12 crc kubenswrapper[4725]: I1002 11:49:12.888985 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z786" event={"ID":"3a682c38-9f47-4cf9-9111-c709313b0a72","Type":"ContainerDied","Data":"029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f"} Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.287625 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.428961 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") pod \"3a682c38-9f47-4cf9-9111-c709313b0a72\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.429031 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle\") pod \"3a682c38-9f47-4cf9-9111-c709313b0a72\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.429213 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts\") pod \"3a682c38-9f47-4cf9-9111-c709313b0a72\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.429358 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrq2s\" (UniqueName: \"kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s\") pod \"3a682c38-9f47-4cf9-9111-c709313b0a72\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.435464 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts" (OuterVolumeSpecName: "scripts") pod "3a682c38-9f47-4cf9-9111-c709313b0a72" (UID: "3a682c38-9f47-4cf9-9111-c709313b0a72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.436431 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s" (OuterVolumeSpecName: "kube-api-access-zrq2s") pod "3a682c38-9f47-4cf9-9111-c709313b0a72" (UID: "3a682c38-9f47-4cf9-9111-c709313b0a72"). InnerVolumeSpecName "kube-api-access-zrq2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:14 crc kubenswrapper[4725]: E1002 11:49:14.464345 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data podName:3a682c38-9f47-4cf9-9111-c709313b0a72 nodeName:}" failed. No retries permitted until 2025-10-02 11:49:14.96428177 +0000 UTC m=+1274.871781253 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data") pod "3a682c38-9f47-4cf9-9111-c709313b0a72" (UID: "3a682c38-9f47-4cf9-9111-c709313b0a72") : error deleting /var/lib/kubelet/pods/3a682c38-9f47-4cf9-9111-c709313b0a72/volume-subpaths: remove /var/lib/kubelet/pods/3a682c38-9f47-4cf9-9111-c709313b0a72/volume-subpaths: no such file or directory Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.469820 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a682c38-9f47-4cf9-9111-c709313b0a72" (UID: "3a682c38-9f47-4cf9-9111-c709313b0a72"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.531690 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrq2s\" (UniqueName: \"kubernetes.io/projected/3a682c38-9f47-4cf9-9111-c709313b0a72-kube-api-access-zrq2s\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.531744 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.531753 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.909353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z786" event={"ID":"3a682c38-9f47-4cf9-9111-c709313b0a72","Type":"ContainerDied","Data":"08d933d71320237dbcacc7ced41bbed6c4ab6f5f43730356a75af5d9e93de42d"} Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.909402 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d933d71320237dbcacc7ced41bbed6c4ab6f5f43730356a75af5d9e93de42d" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.909449 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z786" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.978223 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.978605 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.978654 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.979386 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.979448 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc" gracePeriod=600 Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.989491 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.989491 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 11:49:14 crc kubenswrapper[4725]: E1002 11:49:14.990112 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a682c38-9f47-4cf9-9111-c709313b0a72" containerName="nova-cell1-conductor-db-sync" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.990144 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a682c38-9f47-4cf9-9111-c709313b0a72" containerName="nova-cell1-conductor-db-sync" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.990368 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a682c38-9f47-4cf9-9111-c709313b0a72" containerName="nova-cell1-conductor-db-sync" Oct 02 11:49:14 crc kubenswrapper[4725]: I1002 11:49:14.991165 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.000077 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.041048 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") pod \"3a682c38-9f47-4cf9-9111-c709313b0a72\" (UID: \"3a682c38-9f47-4cf9-9111-c709313b0a72\") " Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.042561 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.042843 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.043033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl95h\" (UniqueName: \"kubernetes.io/projected/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-kube-api-access-rl95h\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.054324 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data" (OuterVolumeSpecName: "config-data") pod "3a682c38-9f47-4cf9-9111-c709313b0a72" (UID: "3a682c38-9f47-4cf9-9111-c709313b0a72"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.145480 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.145590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl95h\" (UniqueName: \"kubernetes.io/projected/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-kube-api-access-rl95h\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.145682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.145772 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a682c38-9f47-4cf9-9111-c709313b0a72-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.151038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.158243 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.160093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl95h\" (UniqueName: \"kubernetes.io/projected/431a2433-2959-4ab9-a6ed-2dc9dc8ef55a-kube-api-access-rl95h\") pod \"nova-cell1-conductor-0\" (UID: \"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a\") " pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.331333 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.762262 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.919884 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc" exitCode=0 Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.920215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc"} Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.920243 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08"} Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.920259 4725 scope.go:117] "RemoveContainer" containerID="f8e953302b3d29f33ec1243f1fa79e72a6d8548353e0303bbf9f533c0d55729e" Oct 02 11:49:15 crc kubenswrapper[4725]: I1002 11:49:15.921304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a","Type":"ContainerStarted","Data":"ca5a8c8e0dc360b3ad7ddbd4b84e97d4e69f3920c30688ef74d4d98d72400745"} Oct 02 11:49:16 crc kubenswrapper[4725]: I1002 11:49:16.935208 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"431a2433-2959-4ab9-a6ed-2dc9dc8ef55a","Type":"ContainerStarted","Data":"31cf15702c8b93a42fea192e754e99cf77e2d2cbd8a3d23feccf0418797a2a5f"} Oct 02 11:49:16 crc kubenswrapper[4725]: I1002 11:49:16.935331 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:16 crc kubenswrapper[4725]: I1002 11:49:16.954605 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.954588438 podStartE2EDuration="2.954588438s" podCreationTimestamp="2025-10-02 11:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:16.949045412 +0000 UTC m=+1276.856544885" watchObservedRunningTime="2025-10-02 11:49:16.954588438 +0000 UTC m=+1276.862087891" Oct 02 11:49:17 crc kubenswrapper[4725]: I1002 11:49:17.349148 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 11:49:17 crc kubenswrapper[4725]: I1002 11:49:17.367692 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 11:49:17 crc kubenswrapper[4725]: I1002 11:49:17.367757 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 11:49:17 crc kubenswrapper[4725]: I1002 11:49:17.377691 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 11:49:17 crc kubenswrapper[4725]: I1002 11:49:17.972370 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 11:49:18 
crc kubenswrapper[4725]: I1002 11:49:18.400174 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:18 crc kubenswrapper[4725]: I1002 11:49:18.400263 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.261896 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.261946 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:49:19 crc kubenswrapper[4725]: E1002 11:49:19.588019 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9bad7c_78f8_435d_8449_7c5b04a16869.slice/crio-conmon-3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a682c38_9f47_4cf9_9111_c709313b0a72.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod685c0bd4_be48_464c_8d69_73a8637fbad8.slice/crio-c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a682c38_9f47_4cf9_9111_c709313b0a72.slice/crio-029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eadbb0c_ae53_40e9_963f_9f12c89d3ee1.slice/crio-f859409dff8d541d8ad97244cde13eaeaf14355dee8ad650f327c024a9efc413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a682c38_9f47_4cf9_9111_c709313b0a72.slice/crio-conmon-029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eadbb0c_ae53_40e9_963f_9f12c89d3ee1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a682c38_9f47_4cf9_9111_c709313b0a72.slice/crio-08d933d71320237dbcacc7ced41bbed6c4ab6f5f43730356a75af5d9e93de42d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eadbb0c_ae53_40e9_963f_9f12c89d3ee1.slice/crio-conmon-00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eadbb0c_ae53_40e9_963f_9f12c89d3ee1.slice/crio-00a8fa35e3e95d06dc975505301cebe9c71b0f705eb88bd488d3fcc8710e53b3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9bad7c_78f8_435d_8449_7c5b04a16869.slice/crio-3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.751844 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.828274 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle\") pod \"685c0bd4-be48-464c-8d69-73a8637fbad8\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.828416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data\") pod \"685c0bd4-be48-464c-8d69-73a8637fbad8\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.828628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxbc\" (UniqueName: \"kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc\") pod \"685c0bd4-be48-464c-8d69-73a8637fbad8\" (UID: \"685c0bd4-be48-464c-8d69-73a8637fbad8\") " Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.836234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc" (OuterVolumeSpecName: "kube-api-access-plxbc") pod "685c0bd4-be48-464c-8d69-73a8637fbad8" (UID: "685c0bd4-be48-464c-8d69-73a8637fbad8"). InnerVolumeSpecName "kube-api-access-plxbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.857632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "685c0bd4-be48-464c-8d69-73a8637fbad8" (UID: "685c0bd4-be48-464c-8d69-73a8637fbad8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.875428 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data" (OuterVolumeSpecName: "config-data") pod "685c0bd4-be48-464c-8d69-73a8637fbad8" (UID: "685c0bd4-be48-464c-8d69-73a8637fbad8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.930978 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.931036 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685c0bd4-be48-464c-8d69-73a8637fbad8-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.931048 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxbc\" (UniqueName: \"kubernetes.io/projected/685c0bd4-be48-464c-8d69-73a8637fbad8-kube-api-access-plxbc\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.963527 4725 generic.go:334] "Generic (PLEG): container finished" podID="685c0bd4-be48-464c-8d69-73a8637fbad8" containerID="c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40" exitCode=137 Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.963565 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"685c0bd4-be48-464c-8d69-73a8637fbad8","Type":"ContainerDied","Data":"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40"} Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.963972 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"685c0bd4-be48-464c-8d69-73a8637fbad8","Type":"ContainerDied","Data":"7a0485736a990de092a4ed25aef95144238f45ade973617f73b95ca53da51f91"}
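[Annotation] The exitCode=137 above follows the usual Unix convention for processes terminated by a signal: 128 plus the signal number, so 137 means SIGKILL (9), consistent with the runtime force-killing the novncproxy container. A tiny helper illustrating the decoding:

package main

import "fmt"

// describeExit applies the 128+signal convention used by shells and
// container runtimes when reporting a signal death as an exit code.
func describeExit(code int) string {
	if code > 128 {
		return fmt.Sprintf("terminated by signal %d", code-128)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	fmt.Println(describeExit(137)) // terminated by signal 9 (SIGKILL)
	fmt.Println(describeExit(143)) // terminated by signal 15 (SIGTERM)
	fmt.Println(describeExit(0))   // normal termination
}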
Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.963603 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.964004 4725 scope.go:117] "RemoveContainer" containerID="c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.989615 4725 scope.go:117] "RemoveContainer" containerID="c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40" Oct 02 11:49:19 crc kubenswrapper[4725]: E1002 11:49:19.990904 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40\": container with ID starting with c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40 not found: ID does not exist" containerID="c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40" Oct 02 11:49:19 crc kubenswrapper[4725]: I1002 11:49:19.990953 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40"} err="failed to get container status \"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40\": rpc error: code = NotFound desc = could not find container \"c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40\": container with ID starting with c7aa189a01f157073fdb378b87f57f8dcc33d9ebc95003b6459e1bf1ee290d40 not found: ID does not exist" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.008249 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.023871 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.033468 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:49:20 crc kubenswrapper[4725]: E1002 11:49:20.033895 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685c0bd4-be48-464c-8d69-73a8637fbad8" containerName="nova-cell1-novncproxy-novncproxy" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.033914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="685c0bd4-be48-464c-8d69-73a8637fbad8" containerName="nova-cell1-novncproxy-novncproxy" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.034121 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="685c0bd4-be48-464c-8d69-73a8637fbad8" containerName="nova-cell1-novncproxy-novncproxy"
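[Annotation] The RemoveContainer/NotFound pair above looks like a benign race: by the time the kubelet asks the runtime for the container's status, the container is already gone, so the error is logged and the cleanup proceeds. A schematic idempotent-removal wrapper in that spirit, with a hypothetical runtime interface standing in for the real CRI client:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound shown in the log
// ("rpc error: code = NotFound ..."); hypothetical, not the CRI API.
var errNotFound = errors.New("not found")

type runtime interface {
	RemoveContainer(id string) error
}

// removeIdempotent treats "already gone" as success, the safe reading
// when the goal is simply that the container no longer exist.
func removeIdempotent(rt runtime, id string) error {
	if err := rt.RemoveContainer(id); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error { return errNotFound }

func main() {
	// The container was already deleted, as in the log; no error surfaces.
	fmt.Println(removeIdempotent(fakeRuntime{}, "c7aa189a01f1"))
}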
Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.034729 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.036927 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.037037 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.037258 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.041440 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.134618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9qm\" (UniqueName: \"kubernetes.io/projected/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-kube-api-access-dk9qm\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.134681 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.134840 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.134885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.134934 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.236928 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9qm\" (UniqueName: \"kubernetes.io/projected/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-kube-api-access-dk9qm\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.237012 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") "
pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.237106 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.237929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.237997 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.242141 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.242393 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.242723 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.243125 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.260579 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9qm\" (UniqueName: \"kubernetes.io/projected/eb9c8f07-9e51-46ad-87b1-a71668a04d3d-kube-api-access-dk9qm\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb9c8f07-9e51-46ad-87b1-a71668a04d3d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.344353 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.344385 4725 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.357490 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.386359 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 02 11:49:20 crc kubenswrapper[4725]: W1002 11:49:20.820963 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb9c8f07_9e51_46ad_87b1_a71668a04d3d.slice/crio-5b7f662a7e0fb33a5c021ee440ce8b074936efb3874be60e1d8e6d7a98880a4a WatchSource:0}: Error finding container 5b7f662a7e0fb33a5c021ee440ce8b074936efb3874be60e1d8e6d7a98880a4a: Status 404 returned error can't find the container with id 5b7f662a7e0fb33a5c021ee440ce8b074936efb3874be60e1d8e6d7a98880a4a Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.826664 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 11:49:20 crc kubenswrapper[4725]: I1002 11:49:20.976533 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb9c8f07-9e51-46ad-87b1-a71668a04d3d","Type":"ContainerStarted","Data":"5b7f662a7e0fb33a5c021ee440ce8b074936efb3874be60e1d8e6d7a98880a4a"} Oct 02 11:49:21 crc kubenswrapper[4725]: I1002 11:49:21.281426 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685c0bd4-be48-464c-8d69-73a8637fbad8" path="/var/lib/kubelet/pods/685c0bd4-be48-464c-8d69-73a8637fbad8/volumes" Oct 02 11:49:21 crc kubenswrapper[4725]: I1002 11:49:21.998483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb9c8f07-9e51-46ad-87b1-a71668a04d3d","Type":"ContainerStarted","Data":"bc1364a8508a085ec2733acc7c8fd2d76e8cfc6748f18fc9d2df681356c8fc3b"} Oct 02 11:49:22 crc kubenswrapper[4725]: I1002 11:49:22.024487 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.024464773 podStartE2EDuration="2.024464773s" podCreationTimestamp="2025-10-02 11:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:22.012553188 +0000 UTC m=+1281.920052651" watchObservedRunningTime="2025-10-02 11:49:22.024464773 +0000 UTC m=+1281.931964236" Oct 02 11:49:24 crc kubenswrapper[4725]: I1002 11:49:24.251150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 02 11:49:25 crc kubenswrapper[4725]: I1002 11:49:25.358597 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:27 crc kubenswrapper[4725]: I1002 11:49:27.374763 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 11:49:27 crc kubenswrapper[4725]: I1002 11:49:27.375796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 11:49:27 crc kubenswrapper[4725]: I1002 11:49:27.379492 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Oct 02 11:49:27 crc kubenswrapper[4725]: I1002 11:49:27.521223 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:27 crc kubenswrapper[4725]: I1002 11:49:27.521505 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" containerName="kube-state-metrics" containerID="cri-o://4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492" gracePeriod=30 Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.040808 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.049955 4725 generic.go:334] "Generic (PLEG): container finished" podID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" containerID="4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492" exitCode=2 Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.050140 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb59bbc2-e952-462f-a94a-30eeae1b81cd","Type":"ContainerDied","Data":"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492"} Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.050198 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb59bbc2-e952-462f-a94a-30eeae1b81cd","Type":"ContainerDied","Data":"24945d57acc4d732bca8fe15f3739ddc307cc6f1df87a6e2123c966bede890dd"} Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.050217 4725 scope.go:117] "RemoveContainer" containerID="4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.051123 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.083625 4725 scope.go:117] "RemoveContainer" containerID="4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.092479 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpjb\" (UniqueName: \"kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb\") pod \"eb59bbc2-e952-462f-a94a-30eeae1b81cd\" (UID: \"eb59bbc2-e952-462f-a94a-30eeae1b81cd\") " Oct 02 11:49:28 crc kubenswrapper[4725]: E1002 11:49:28.093913 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492\": container with ID starting with 4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492 not found: ID does not exist" containerID="4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.093964 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492"} err="failed to get container status \"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492\": rpc error: code = NotFound desc = could not find container \"4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492\": container with ID starting with 4fa2439a22abdfd70128a82442223bdd51adb0d040517cecd55efe4100350492 not found: ID does not exist" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.126068 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb" (OuterVolumeSpecName: "kube-api-access-tcpjb") pod "eb59bbc2-e952-462f-a94a-30eeae1b81cd" (UID: "eb59bbc2-e952-462f-a94a-30eeae1b81cd"). InnerVolumeSpecName "kube-api-access-tcpjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.195889 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpjb\" (UniqueName: \"kubernetes.io/projected/eb59bbc2-e952-462f-a94a-30eeae1b81cd-kube-api-access-tcpjb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.201616 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.391858 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.399441 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.412561 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:28 crc kubenswrapper[4725]: E1002 11:49:28.413030 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" containerName="kube-state-metrics" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.413051 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" containerName="kube-state-metrics" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.413231 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" containerName="kube-state-metrics" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.413894 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.416042 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.418813 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.428428 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.510830 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkln\" (UniqueName: \"kubernetes.io/projected/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-api-access-5kkln\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.510921 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.511003 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.511084 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.612850 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkln\" (UniqueName: \"kubernetes.io/projected/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-api-access-5kkln\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.612910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.612971 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.613048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.616500 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.617451 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.617765 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.630702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkln\" (UniqueName: \"kubernetes.io/projected/e4b7ce88-f603-426c-9af7-b2cccde7469d-kube-api-access-5kkln\") pod \"kube-state-metrics-0\" (UID: \"e4b7ce88-f603-426c-9af7-b2cccde7469d\") " pod="openstack/kube-state-metrics-0" Oct 02 11:49:28 crc kubenswrapper[4725]: I1002 11:49:28.731408 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.227711 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.279839 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb59bbc2-e952-462f-a94a-30eeae1b81cd" path="/var/lib/kubelet/pods/eb59bbc2-e952-462f-a94a-30eeae1b81cd/volumes" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.280876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.280934 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.281962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.282152 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.289116 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.290654 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.443700 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.455493 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.471291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.482772 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.483080 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-central-agent" containerID="cri-o://49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31" gracePeriod=30 Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.483460 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="proxy-httpd" containerID="cri-o://baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180" gracePeriod=30 Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.483498 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="sg-core" containerID="cri-o://66c5857de613d8b019dbddce1829bc72dc907311072c6e4da310f61f81609fb2" gracePeriod=30 Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.483532 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-notification-agent" containerID="cri-o://82901847cb3aa2ed78edc0182eb3f35814fe46b9751b308e8ca19c7705385651" gracePeriod=30
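[Annotation] Each "Killing container with a grace period" entry above maps to the standard stop sequence: the runtime sends SIGTERM, waits up to the grace period (30s for the ceilometer containers here, 600s for machine-config-daemon earlier), then SIGKILLs whatever remains; the kubelet delegates this to CRI-O. A minimal stand-alone sketch of that TERM-then-KILL pattern applied to a local process, illustrative only:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks politely with SIGTERM and escalates to SIGKILL
// when the grace period expires.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		// Exited within the grace period; dying on SIGTERM itself
		// shows up as exit code 143, a clean handler as 0.
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL; observers see exit code 137
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}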
Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529064 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529170 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529193 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529242 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.529316 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7592\" (UniqueName: \"kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644703 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7592\" (UniqueName: \"kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") "
pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644936 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.644985 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.646046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.646724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.647692 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.648253 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.649033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.668575 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7592\" (UniqueName: \"kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592\") pod \"dnsmasq-dns-59cf4bdb65-2qjjr\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: I1002 11:49:29.781837 4725 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:29 crc kubenswrapper[4725]: E1002 11:49:29.856253 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a466353_e216_4368_b9fb_386e37daf3b7.slice/crio-conmon-baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a466353_e216_4368_b9fb_386e37daf3b7.slice/crio-conmon-49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.080812 4725 generic.go:334] "Generic (PLEG): container finished" podID="5a466353-e216-4368-b9fb-386e37daf3b7" containerID="baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180" exitCode=0 Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.081010 4725 generic.go:334] "Generic (PLEG): container finished" podID="5a466353-e216-4368-b9fb-386e37daf3b7" containerID="66c5857de613d8b019dbddce1829bc72dc907311072c6e4da310f61f81609fb2" exitCode=2 Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.081021 4725 generic.go:334] "Generic (PLEG): container finished" podID="5a466353-e216-4368-b9fb-386e37daf3b7" containerID="49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31" exitCode=0 Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.080898 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerDied","Data":"baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180"} Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.081082 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerDied","Data":"66c5857de613d8b019dbddce1829bc72dc907311072c6e4da310f61f81609fb2"} Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.081095 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerDied","Data":"49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31"} Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.082349 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4b7ce88-f603-426c-9af7-b2cccde7469d","Type":"ContainerStarted","Data":"3ae63eecc23e94748a79fae14b02f5b5ffe2d91d98241341b563db51390085cd"} Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.288828 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:49:30 crc kubenswrapper[4725]: W1002 11:49:30.290523 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cb4442_05de_43fa_b0ff_bb12e23ee408.slice/crio-a8bae5cb681d5acd86517fa13efe3790c5e0ff110db1ea283270e34b44bcb70e WatchSource:0}: Error finding container a8bae5cb681d5acd86517fa13efe3790c5e0ff110db1ea283270e34b44bcb70e: Status 404 returned error can't find the container with id a8bae5cb681d5acd86517fa13efe3790c5e0ff110db1ea283270e34b44bcb70e Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.358234 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:30 crc kubenswrapper[4725]: I1002 11:49:30.426274 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.098139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4b7ce88-f603-426c-9af7-b2cccde7469d","Type":"ContainerStarted","Data":"27828b206ad83fbe8651b3678ee591859b78bbf06b374454ed3d3a2a31112ac4"} Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.098272 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.101821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerStarted","Data":"71bae2c29dbf3cebc0c12d77473b674e76a263faa77e11f0a02d771d4290ab68"} Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.101860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerStarted","Data":"a8bae5cb681d5acd86517fa13efe3790c5e0ff110db1ea283270e34b44bcb70e"} Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.122783 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.406150121 podStartE2EDuration="3.122757785s" podCreationTimestamp="2025-10-02 11:49:28 +0000 UTC" firstStartedPulling="2025-10-02 11:49:29.240179384 +0000 UTC m=+1289.147678847" lastFinishedPulling="2025-10-02 11:49:29.956787048 +0000 UTC m=+1289.864286511" observedRunningTime="2025-10-02 11:49:31.113191592 +0000 UTC m=+1291.020691065" watchObservedRunningTime="2025-10-02 11:49:31.122757785 +0000 UTC m=+1291.030257268" Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.129601 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
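[Annotation] The two durations in the kube-state-metrics startup entry above reconcile exactly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (3.122757785s), and podStartSLOduration is that figure minus the time spent pulling the image (lastFinishedPulling minus firstStartedPulling, 0.716607664s), giving 2.406150121s. The subtraction, checked in Go:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-02 11:49:28 +0000 UTC")
	startedPull := mustParse("2025-10-02 11:49:29.240179384 +0000 UTC")
	finishedPull := mustParse("2025-10-02 11:49:29.956787048 +0000 UTC")
	observed := mustParse("2025-10-02 11:49:31.122757785 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - finishedPull.Sub(startedPull) // pull time excluded from the SLO figure
	fmt.Println(e2e) // 3.122757785s, the logged podStartE2EDuration
	fmt.Println(slo) // 2.406150121s, the logged podStartSLOduration
}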
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.342117 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.342339 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.347348 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29k72"]
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.376978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.377047 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.377179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgqq\" (UniqueName: \"kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.377552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.479459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.479574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.479598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.479698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgqq\" (UniqueName: \"kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.484383 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.484660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.484945 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.503443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgqq\" (UniqueName: \"kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq\") pod \"nova-cell1-cell-mapping-29k72\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:31 crc kubenswrapper[4725]: I1002 11:49:31.667701 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29k72"
Oct 02 11:49:32 crc kubenswrapper[4725]: W1002 11:49:32.130554 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebe1b524_1eb2_4952_800a_9e3f70f7690e.slice/crio-70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d WatchSource:0}: Error finding container 70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d: Status 404 returned error can't find the container with id 70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.132980 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29k72"]
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.135847 4725 generic.go:334] "Generic (PLEG): container finished" podID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerID="71bae2c29dbf3cebc0c12d77473b674e76a263faa77e11f0a02d771d4290ab68" exitCode=0
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.136766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerDied","Data":"71bae2c29dbf3cebc0c12d77473b674e76a263faa77e11f0a02d771d4290ab68"}
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.268169 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.268397 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-log" containerID="cri-o://06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7" gracePeriod=30
Oct 02 11:49:32 crc kubenswrapper[4725]: I1002 11:49:32.268792 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-api" containerID="cri-o://4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15" gracePeriod=30
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.145622 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b44478a-7204-4221-a0ae-fb44e9960664" containerID="06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7" exitCode=143
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.146096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerDied","Data":"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7"}
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.147625 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerStarted","Data":"7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989"}
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.148673 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr"
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.150423 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29k72" event={"ID":"ebe1b524-1eb2-4952-800a-9e3f70f7690e","Type":"ContainerStarted","Data":"0104793470e16053cf855d90c43de6c2710ea42316fca074ab9a281be5a2fae7"}
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.150448 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29k72" event={"ID":"ebe1b524-1eb2-4952-800a-9e3f70f7690e","Type":"ContainerStarted","Data":"70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d"}
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.170785 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" podStartSLOduration=4.1707638320000004 podStartE2EDuration="4.170763832s" podCreationTimestamp="2025-10-02 11:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:33.164091834 +0000 UTC m=+1293.071591297" watchObservedRunningTime="2025-10-02 11:49:33.170763832 +0000 UTC m=+1293.078263295"
Oct 02 11:49:33 crc kubenswrapper[4725]: I1002 11:49:33.183844 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-29k72" podStartSLOduration=2.183816368 podStartE2EDuration="2.183816368s" podCreationTimestamp="2025-10-02 11:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:33.17975016 +0000 UTC m=+1293.087249633" watchObservedRunningTime="2025-10-02 11:49:33.183816368 +0000 UTC m=+1293.091315831"
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.162934 4725 generic.go:334] "Generic (PLEG): container finished" podID="5a466353-e216-4368-b9fb-386e37daf3b7" containerID="82901847cb3aa2ed78edc0182eb3f35814fe46b9751b308e8ca19c7705385651" exitCode=0
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.163049 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerDied","Data":"82901847cb3aa2ed78edc0182eb3f35814fe46b9751b308e8ca19c7705385651"}
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.754539 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
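The exitCode field in the "container finished" entries is worth triaging: 0 is a clean exit, 2 (the ceilometer container earlier) is an application error, and 143 (nova-api-log above, right after the gracePeriod=30 kill) is 128+15, the conventional encoding for termination by SIGTERM. A small stdin filter for the non-zero cases:

    import re
    import sys

    PAT = re.compile(r'container finished" podID="([^"]+)" containerID="([0-9a-f]{64})" exitCode=(\d+)')

    for line in sys.stdin:
        m = PAT.search(line)
        if not m:
            continue
        pod_uid, container_id, code = m.group(1), m.group(2), int(m.group(3))
        if code == 0:
            continue  # clean exit
        note = f" (signal {code - 128})" if code > 128 else ""
        print(f"pod {pod_uid} container {container_id[:12]} exited {code}{note}")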
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.861661 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.861858 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.861915 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgnl\" (UniqueName: \"kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862028 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862149 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862171 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data\") pod \"5a466353-e216-4368-b9fb-386e37daf3b7\" (UID: \"5a466353-e216-4368-b9fb-386e37daf3b7\") "
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862870 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.862895 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.871167 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl" (OuterVolumeSpecName: "kube-api-access-4lgnl") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "kube-api-access-4lgnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.890514 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts" (OuterVolumeSpecName: "scripts") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.905567 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.952153 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.964910 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.964964 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgnl\" (UniqueName: \"kubernetes.io/projected/5a466353-e216-4368-b9fb-386e37daf3b7-kube-api-access-4lgnl\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.964978 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a466353-e216-4368-b9fb-386e37daf3b7-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.964988 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.964998 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-scripts\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:34 crc kubenswrapper[4725]: I1002 11:49:34.979794 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data" (OuterVolumeSpecName: "config-data") pod "5a466353-e216-4368-b9fb-386e37daf3b7" (UID: "5a466353-e216-4368-b9fb-386e37daf3b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.067581 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a466353-e216-4368-b9fb-386e37daf3b7-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.175212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a466353-e216-4368-b9fb-386e37daf3b7","Type":"ContainerDied","Data":"77cf43219b405d6af3a35049e1b43d8c63ba20200c017de278757c7959783269"}
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.175316 4725 scope.go:117] "RemoveContainer" containerID="baa0522fb1dbc2354810d5394ed2e3a797c6175f8ad25c3f4eb8ffca49e39180"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.175550 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.199341 4725 scope.go:117] "RemoveContainer" containerID="66c5857de613d8b019dbddce1829bc72dc907311072c6e4da310f61f81609fb2"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.215667 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.235973 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.238889 4725 scope.go:117] "RemoveContainer" containerID="82901847cb3aa2ed78edc0182eb3f35814fe46b9751b308e8ca19c7705385651"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.244594 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:49:35 crc kubenswrapper[4725]: E1002 11:49:35.245165 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-central-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245195 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-central-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: E1002 11:49:35.245219 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-notification-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245228 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-notification-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: E1002 11:49:35.245243 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="sg-core"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245251 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="sg-core"
Oct 02 11:49:35 crc kubenswrapper[4725]: E1002 11:49:35.245287 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="proxy-httpd"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245295 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="proxy-httpd"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245509 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-central-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245543 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-notification-agent"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245564 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="sg-core"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.245579 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="proxy-httpd"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.247615 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
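The cpu_manager/memory_manager "RemoveStaleState" burst fires as the replacement ceilometer-0 (same name, new UID) is admitted: the resource managers purge per-container state still keyed to the deleted UID 5a466353-..., so these entries are routine during pod churn despite the E severity. All such lines share klog's text format, a quoted message followed by key="value" pairs; an approximate parser sketch (klog's real quoting rules are richer than this regex):

    import re

    MSG = re.compile(r'\] "((?:[^"\\]|\\.)*)" ?(.*)$')
    KV = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_klog(line):
        m = MSG.search(line)
        if not m:
            return None  # unstructured line (e.g. the cadvisor watch warnings)
        message, tail = m.groups()
        return message, {k: v.strip('"') for k, v in KV.findall(tail)}

    line = ('I1002 11:49:35.245195 4725 state_mem.go:107] "Deleted CPUSet assignment" '
            'podUID="5a466353-e216-4368-b9fb-386e37daf3b7" containerName="ceilometer-central-agent"')
    print(parse_klog(line))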
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.250325 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.251528 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.251778 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.254970 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.318899 4725 scope.go:117] "RemoveContainer" containerID="49e147a5fd65e17db5852cae9892052998ef26d956b25bc1f14aeae713deec31"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.345351 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a466353-e216-4368-b9fb-386e37daf3b7" path="/var/lib/kubelet/pods/5a466353-e216-4368-b9fb-386e37daf3b7/volumes"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.375790 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.375840 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-scripts\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.375929 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-log-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.375959 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvnr5\" (UniqueName: \"kubernetes.io/projected/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-kube-api-access-hvnr5\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.375987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.376007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.376043 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-config-data\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.376095 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-run-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.477869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-config-data\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.477931 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-run-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.477992 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478013 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-scripts\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-log-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478087 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvnr5\" (UniqueName: \"kubernetes.io/projected/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-kube-api-access-hvnr5\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.478450 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-run-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0"
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-run-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.482997 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-log-httpd\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.483616 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.495630 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.497927 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.499281 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-scripts\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.500045 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-config-data\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.531438 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvnr5\" (UniqueName: \"kubernetes.io/projected/9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930-kube-api-access-hvnr5\") pod \"ceilometer-0\" (UID: \"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930\") " pod="openstack/ceilometer-0" Oct 02 11:49:35 crc kubenswrapper[4725]: I1002 11:49:35.569011 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.027077 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.148202 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.184525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930","Type":"ContainerStarted","Data":"10fea4d9c49dec7cb7a8993a438f695f48426b1543f6eae66d2987ca5fc2933d"} Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.188496 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b44478a-7204-4221-a0ae-fb44e9960664" containerID="4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15" exitCode=0 Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.188531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerDied","Data":"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15"} Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.188553 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b44478a-7204-4221-a0ae-fb44e9960664","Type":"ContainerDied","Data":"087d14884040bfea93bfdfc60480ba6ab66246a269cca2c5cd68564543c20eaa"} Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.188570 4725 scope.go:117] "RemoveContainer" containerID="4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.188711 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.216863 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle\") pod \"8b44478a-7204-4221-a0ae-fb44e9960664\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.217022 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs\") pod \"8b44478a-7204-4221-a0ae-fb44e9960664\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.217062 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzlx4\" (UniqueName: \"kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4\") pod \"8b44478a-7204-4221-a0ae-fb44e9960664\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.217099 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data\") pod \"8b44478a-7204-4221-a0ae-fb44e9960664\" (UID: \"8b44478a-7204-4221-a0ae-fb44e9960664\") " Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.217942 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs" (OuterVolumeSpecName: "logs") pod "8b44478a-7204-4221-a0ae-fb44e9960664" (UID: "8b44478a-7204-4221-a0ae-fb44e9960664"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.231036 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4" (OuterVolumeSpecName: "kube-api-access-dzlx4") pod "8b44478a-7204-4221-a0ae-fb44e9960664" (UID: "8b44478a-7204-4221-a0ae-fb44e9960664"). InnerVolumeSpecName "kube-api-access-dzlx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.236543 4725 scope.go:117] "RemoveContainer" containerID="06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.268299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data" (OuterVolumeSpecName: "config-data") pod "8b44478a-7204-4221-a0ae-fb44e9960664" (UID: "8b44478a-7204-4221-a0ae-fb44e9960664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.292067 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b44478a-7204-4221-a0ae-fb44e9960664" (UID: "8b44478a-7204-4221-a0ae-fb44e9960664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.323015 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.326589 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b44478a-7204-4221-a0ae-fb44e9960664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.326895 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b44478a-7204-4221-a0ae-fb44e9960664-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.327027 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzlx4\" (UniqueName: \"kubernetes.io/projected/8b44478a-7204-4221-a0ae-fb44e9960664-kube-api-access-dzlx4\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.349118 4725 scope.go:117] "RemoveContainer" containerID="4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15" Oct 02 11:49:36 crc kubenswrapper[4725]: E1002 11:49:36.351237 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15\": container with ID starting with 4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15 not found: ID does not exist" containerID="4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.351384 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15"} err="failed to get container status 
\"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15\": rpc error: code = NotFound desc = could not find container \"4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15\": container with ID starting with 4c0829958b22e3d5674b7de678f90a91eacc2ce1e5c0e63196dda8637baeba15 not found: ID does not exist" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.351487 4725 scope.go:117] "RemoveContainer" containerID="06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7" Oct 02 11:49:36 crc kubenswrapper[4725]: E1002 11:49:36.351900 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7\": container with ID starting with 06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7 not found: ID does not exist" containerID="06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.352236 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7"} err="failed to get container status \"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7\": rpc error: code = NotFound desc = could not find container \"06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7\": container with ID starting with 06483eeda1b2fee35ee89c21a877d5a3c4901418d501b11898de5198c72188e7 not found: ID does not exist" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.524604 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.536805 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.550321 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:36 crc kubenswrapper[4725]: E1002 11:49:36.550784 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-log" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.550801 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-log" Oct 02 11:49:36 crc kubenswrapper[4725]: E1002 11:49:36.550809 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-api" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.550816 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-api" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.551007 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-api" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.551023 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" containerName="nova-api-log" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.552017 4725 util.go:30] "No sandbox for pod can be found. 
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.553709 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.554005 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.555407 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.559374 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633273 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fcj\" (UniqueName: \"kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633577 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633659 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633697 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.633849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735584 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735644 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735745 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735829 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fcj\" (UniqueName: \"kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.735866 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.737073 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.740594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.740635 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.740946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.741338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.754909 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fcj\" (UniqueName: \"kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj\") pod \"nova-api-0\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Oct 02 11:49:36 crc kubenswrapper[4725]: I1002 11:49:36.872861 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:37 crc kubenswrapper[4725]: I1002 11:49:37.284371 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b44478a-7204-4221-a0ae-fb44e9960664" path="/var/lib/kubelet/pods/8b44478a-7204-4221-a0ae-fb44e9960664/volumes" Oct 02 11:49:37 crc kubenswrapper[4725]: I1002 11:49:37.381839 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:37 crc kubenswrapper[4725]: W1002 11:49:37.393947 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4263b652_1614_401f_9496_2cea11fe5ee2.slice/crio-d7a207823becd0fc7a3cd6bcdf04e52e00f4bdbedf41828d010d742eb42fd0d2 WatchSource:0}: Error finding container d7a207823becd0fc7a3cd6bcdf04e52e00f4bdbedf41828d010d742eb42fd0d2: Status 404 returned error can't find the container with id d7a207823becd0fc7a3cd6bcdf04e52e00f4bdbedf41828d010d742eb42fd0d2 Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.207497 4725 generic.go:334] "Generic (PLEG): container finished" podID="ebe1b524-1eb2-4952-800a-9e3f70f7690e" containerID="0104793470e16053cf855d90c43de6c2710ea42316fca074ab9a281be5a2fae7" exitCode=0 Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.207608 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29k72" event={"ID":"ebe1b524-1eb2-4952-800a-9e3f70f7690e","Type":"ContainerDied","Data":"0104793470e16053cf855d90c43de6c2710ea42316fca074ab9a281be5a2fae7"} Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.211754 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerStarted","Data":"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d"} Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.211779 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerStarted","Data":"d7a207823becd0fc7a3cd6bcdf04e52e00f4bdbedf41828d010d742eb42fd0d2"} Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.213081 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930","Type":"ContainerStarted","Data":"cbfc10966f14b298825f3fa0ef7b483d0a6dda77ef81a06a47008951159c25bb"} Oct 02 11:49:38 crc kubenswrapper[4725]: I1002 11:49:38.775074 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.226862 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerStarted","Data":"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e"} Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.259842 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.259820756 podStartE2EDuration="3.259820756s" podCreationTimestamp="2025-10-02 11:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:39.255554992 +0000 UTC m=+1299.163054465" 
watchObservedRunningTime="2025-10-02 11:49:39.259820756 +0000 UTC m=+1299.167320219" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.644191 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29k72" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.688849 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data\") pod \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.688924 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgqq\" (UniqueName: \"kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq\") pod \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.688975 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts\") pod \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.689129 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle\") pod \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\" (UID: \"ebe1b524-1eb2-4952-800a-9e3f70f7690e\") " Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.696048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq" (OuterVolumeSpecName: "kube-api-access-7bgqq") pod "ebe1b524-1eb2-4952-800a-9e3f70f7690e" (UID: "ebe1b524-1eb2-4952-800a-9e3f70f7690e"). InnerVolumeSpecName "kube-api-access-7bgqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.700856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts" (OuterVolumeSpecName: "scripts") pod "ebe1b524-1eb2-4952-800a-9e3f70f7690e" (UID: "ebe1b524-1eb2-4952-800a-9e3f70f7690e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.718922 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data" (OuterVolumeSpecName: "config-data") pod "ebe1b524-1eb2-4952-800a-9e3f70f7690e" (UID: "ebe1b524-1eb2-4952-800a-9e3f70f7690e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.725845 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebe1b524-1eb2-4952-800a-9e3f70f7690e" (UID: "ebe1b524-1eb2-4952-800a-9e3f70f7690e"). InnerVolumeSpecName "combined-ca-bundle". 
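In the startup-latency entries for dnsmasq-dns, nova-cell1-cell-mapping-29k72 and nova-api-0, firstStartedPulling and lastFinishedPulling read "0001-01-01 00:00:00 +0000 UTC", Go's zero time: no pull happened because the images were already on the node, and podStartSLOduration accordingly equals the full E2E duration. A tiny detector sketch (stdin; the optional " m=..." monotonic suffix is stripped):

    import re
    import sys

    GO_ZERO = "0001-01-01 00:00:00 +0000 UTC"
    PAT = re.compile(r'"Observed pod startup duration" pod="([^"]+)".*?firstStartedPulling="([^"]+?)(?: m=[^"]*)?"')

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            pod, first = m.groups()
            verdict = "pulled an image" if first != GO_ZERO else "images already present (zero-time sentinel)"
            print(f"{pod}: {verdict}")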
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.783909 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.791428 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.791471 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgqq\" (UniqueName: \"kubernetes.io/projected/ebe1b524-1eb2-4952-800a-9e3f70f7690e-kube-api-access-7bgqq\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.791488 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.791500 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe1b524-1eb2-4952-800a-9e3f70f7690e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.858443 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:49:39 crc kubenswrapper[4725]: I1002 11:49:39.858673 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="dnsmasq-dns" containerID="cri-o://185ba311d68d38712c9ac15f58a6739a9179be4fdb31f0a351672ba145a13f23" gracePeriod=10 Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.241999 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerID="185ba311d68d38712c9ac15f58a6739a9179be4fdb31f0a351672ba145a13f23" exitCode=0 Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.242162 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" event={"ID":"d1e81b79-e75e-4647-985f-5ac5395faf7f","Type":"ContainerDied","Data":"185ba311d68d38712c9ac15f58a6739a9179be4fdb31f0a351672ba145a13f23"} Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.251457 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29k72" event={"ID":"ebe1b524-1eb2-4952-800a-9e3f70f7690e","Type":"ContainerDied","Data":"70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d"} Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.251495 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70fd7b93ef5b516454db97fdd9d0c1b674ab5f0622c008a6e1f829c14a5ea34d" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.251542 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29k72" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.255950 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930","Type":"ContainerStarted","Data":"e7ac6785814ad01888e039ae06faed839e89b9670d2434c791fe6263ccd0543d"} Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.396047 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.429697 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.429934 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerName="nova-scheduler-scheduler" containerID="cri-o://2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" gracePeriod=30 Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.448884 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.450184 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" containerID="cri-o://bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f" gracePeriod=30 Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.451439 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" containerID="cri-o://8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6" gracePeriod=30 Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.530591 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617310 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617368 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wgr9\" (UniqueName: \"kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617437 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.617603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc\") pod \"d1e81b79-e75e-4647-985f-5ac5395faf7f\" (UID: \"d1e81b79-e75e-4647-985f-5ac5395faf7f\") " Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.633898 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9" (OuterVolumeSpecName: "kube-api-access-7wgr9") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "kube-api-access-7wgr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.676613 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.679672 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.693630 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.708171 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config" (OuterVolumeSpecName: "config") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.716816 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1e81b79-e75e-4647-985f-5ac5395faf7f" (UID: "d1e81b79-e75e-4647-985f-5ac5395faf7f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719584 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719618 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719632 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wgr9\" (UniqueName: \"kubernetes.io/projected/d1e81b79-e75e-4647-985f-5ac5395faf7f-kube-api-access-7wgr9\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719645 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719656 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:40 crc kubenswrapper[4725]: I1002 11:49:40.719666 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e81b79-e75e-4647-985f-5ac5395faf7f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.268023 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.269927 4725 generic.go:334] "Generic (PLEG): container finished" podID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerID="bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f" exitCode=143 Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.277433 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-log" containerID="cri-o://e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" gracePeriod=30 Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.277503 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-api" containerID="cri-o://fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" gracePeriod=30 Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.284281 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-r9lvl" event={"ID":"d1e81b79-e75e-4647-985f-5ac5395faf7f","Type":"ContainerDied","Data":"b6c258d633883bc6953337f588b064465df0a5f5cb52dbd01b1a95dd66e7f60a"} Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.284339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerDied","Data":"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f"} Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.284356 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930","Type":"ContainerStarted","Data":"67bf8edc3137a4571fbc8ac57ff9444586f1f112263cdb9791ef30cf914cea4e"} Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.284381 4725 scope.go:117] "RemoveContainer" containerID="185ba311d68d38712c9ac15f58a6739a9179be4fdb31f0a351672ba145a13f23" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.340375 4725 scope.go:117] "RemoveContainer" containerID="2b4073dfa09153fa5f84647e792316f784d5c780044c7797e6dd267ccd1710db" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.349865 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.358169 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-r9lvl"] Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.907690 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941614 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941642 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fcj\" (UniqueName: \"kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941749 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941859 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.941900 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data\") pod \"4263b652-1614-401f-9496-2cea11fe5ee2\" (UID: \"4263b652-1614-401f-9496-2cea11fe5ee2\") " Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.942300 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs" (OuterVolumeSpecName: "logs") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.942514 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4263b652-1614-401f-9496-2cea11fe5ee2-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.947069 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj" (OuterVolumeSpecName: "kube-api-access-55fcj") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "kube-api-access-55fcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.973939 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data" (OuterVolumeSpecName: "config-data") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.978247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:41 crc kubenswrapper[4725]: I1002 11:49:41.997838 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.001099 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4263b652-1614-401f-9496-2cea11fe5ee2" (UID: "4263b652-1614-401f-9496-2cea11fe5ee2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.044141 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.044184 4725 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.044198 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.044210 4725 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4263b652-1614-401f-9496-2cea11fe5ee2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.044250 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55fcj\" (UniqueName: \"kubernetes.io/projected/4263b652-1614-401f-9496-2cea11fe5ee2-kube-api-access-55fcj\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281583 4725 generic.go:334] "Generic (PLEG): container finished" podID="4263b652-1614-401f-9496-2cea11fe5ee2" containerID="fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" exitCode=0 Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281614 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281623 4725 generic.go:334] "Generic (PLEG): container finished" podID="4263b652-1614-401f-9496-2cea11fe5ee2" containerID="e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" exitCode=143 Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281819 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerDied","Data":"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e"} Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281874 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerDied","Data":"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d"} Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4263b652-1614-401f-9496-2cea11fe5ee2","Type":"ContainerDied","Data":"d7a207823becd0fc7a3cd6bcdf04e52e00f4bdbedf41828d010d742eb42fd0d2"} Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.281911 4725 scope.go:117] "RemoveContainer" containerID="fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.306542 4725 scope.go:117] "RemoveContainer" containerID="e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.329383 4725 scope.go:117] "RemoveContainer" containerID="fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.329925 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e\": container with ID starting with fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e not found: ID does not exist" containerID="fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.329964 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e"} err="failed to get container status \"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e\": rpc error: code = NotFound desc = could not find container \"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e\": container with ID starting with fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e not found: ID does not exist" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.329987 4725 scope.go:117] "RemoveContainer" containerID="e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.330247 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d\": container with ID starting with e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d not found: ID does not exist" containerID="e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.330266 4725 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d"} err="failed to get container status \"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d\": rpc error: code = NotFound desc = could not find container \"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d\": container with ID starting with e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d not found: ID does not exist" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.330278 4725 scope.go:117] "RemoveContainer" containerID="fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.330479 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e"} err="failed to get container status \"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e\": rpc error: code = NotFound desc = could not find container \"fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e\": container with ID starting with fcf948ffcbfffccd67d525fe725b104dfb792ef742778e8bd6d0daeda56f604e not found: ID does not exist" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.330496 4725 scope.go:117] "RemoveContainer" containerID="e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.330739 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d"} err="failed to get container status \"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d\": rpc error: code = NotFound desc = could not find container \"e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d\": container with ID starting with e955d8c9f83ffa0f89085757dd54fbdc38dfbc9255bdca7c20dc223e19546e5d not found: ID does not exist" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.331485 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.349865 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.352336 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.353641 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.359533 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.359617 4725 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerName="nova-scheduler-scheduler" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.373947 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.374322 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe1b524-1eb2-4952-800a-9e3f70f7690e" containerName="nova-manage" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374340 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe1b524-1eb2-4952-800a-9e3f70f7690e" containerName="nova-manage" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.374359 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="dnsmasq-dns" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374365 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="dnsmasq-dns" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.374377 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-log" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374383 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-log" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.374407 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-api" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374413 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-api" Oct 02 11:49:42 crc kubenswrapper[4725]: E1002 11:49:42.374430 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="init" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374437 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="init" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374594 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" containerName="dnsmasq-dns" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374603 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe1b524-1eb2-4952-800a-9e3f70f7690e" containerName="nova-manage" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374628 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-log" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.374639 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" containerName="nova-api-api" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.375640 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.378044 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.378239 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.379419 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.388997 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.449472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91849e10-6e8e-466f-a603-1c15622941c6-logs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.449913 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.450098 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-config-data\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.450279 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mn65\" (UniqueName: \"kubernetes.io/projected/91849e10-6e8e-466f-a603-1c15622941c6-kube-api-access-8mn65\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.450474 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.452548 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.554276 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.554611 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.554713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91849e10-6e8e-466f-a603-1c15622941c6-logs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.554805 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.554938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-config-data\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.555284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mn65\" (UniqueName: \"kubernetes.io/projected/91849e10-6e8e-466f-a603-1c15622941c6-kube-api-access-8mn65\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.556420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91849e10-6e8e-466f-a603-1c15622941c6-logs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.559488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.559564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.573377 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.574008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91849e10-6e8e-466f-a603-1c15622941c6-config-data\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.581375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mn65\" (UniqueName: \"kubernetes.io/projected/91849e10-6e8e-466f-a603-1c15622941c6-kube-api-access-8mn65\") pod \"nova-api-0\" (UID: \"91849e10-6e8e-466f-a603-1c15622941c6\") " 
pod="openstack/nova-api-0" Oct 02 11:49:42 crc kubenswrapper[4725]: I1002 11:49:42.698710 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.218206 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 11:49:43 crc kubenswrapper[4725]: W1002 11:49:43.222923 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91849e10_6e8e_466f_a603_1c15622941c6.slice/crio-9f92a12faf9ecb4a6c053f93e2ab3d59141d6bef0c0ed8bc8295fa4228cb5d63 WatchSource:0}: Error finding container 9f92a12faf9ecb4a6c053f93e2ab3d59141d6bef0c0ed8bc8295fa4228cb5d63: Status 404 returned error can't find the container with id 9f92a12faf9ecb4a6c053f93e2ab3d59141d6bef0c0ed8bc8295fa4228cb5d63 Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.281943 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4263b652-1614-401f-9496-2cea11fe5ee2" path="/var/lib/kubelet/pods/4263b652-1614-401f-9496-2cea11fe5ee2/volumes" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.283197 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e81b79-e75e-4647-985f-5ac5395faf7f" path="/var/lib/kubelet/pods/d1e81b79-e75e-4647-985f-5ac5395faf7f/volumes" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.303444 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91849e10-6e8e-466f-a603-1c15622941c6","Type":"ContainerStarted","Data":"9f92a12faf9ecb4a6c053f93e2ab3d59141d6bef0c0ed8bc8295fa4228cb5d63"} Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.314422 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930","Type":"ContainerStarted","Data":"af8d9dbc574b21815a20af9ca5ff659630c426020a85cb2f135b4a3361bde3d4"} Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.314525 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.335027 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7665725110000001 podStartE2EDuration="8.335009248s" podCreationTimestamp="2025-10-02 11:49:35 +0000 UTC" firstStartedPulling="2025-10-02 11:49:36.038979429 +0000 UTC m=+1295.946478892" lastFinishedPulling="2025-10-02 11:49:42.607416166 +0000 UTC m=+1302.514915629" observedRunningTime="2025-10-02 11:49:43.333307473 +0000 UTC m=+1303.240806936" watchObservedRunningTime="2025-10-02 11:49:43.335009248 +0000 UTC m=+1303.242508711" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.593992 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:47726->10.217.0.195:8775: read: connection reset by peer" Oct 02 11:49:43 crc kubenswrapper[4725]: I1002 11:49:43.594050 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:47722->10.217.0.195:8775: read: connection reset by peer" Oct 02 11:49:44 crc kubenswrapper[4725]: 
I1002 11:49:44.092460 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.202505 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data\") pod \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.202765 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs\") pod \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.202833 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ljf9\" (UniqueName: \"kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9\") pod \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.202902 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs\") pod \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.203044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle\") pod \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\" (UID: \"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3\") " Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.203195 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs" (OuterVolumeSpecName: "logs") pod "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" (UID: "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.203618 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-logs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.213085 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9" (OuterVolumeSpecName: "kube-api-access-9ljf9") pod "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" (UID: "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3"). InnerVolumeSpecName "kube-api-access-9ljf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.250526 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" (UID: "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.258674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data" (OuterVolumeSpecName: "config-data") pod "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" (UID: "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.309271 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ljf9\" (UniqueName: \"kubernetes.io/projected/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-kube-api-access-9ljf9\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.309310 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.309323 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.316908 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" (UID: "c380fcbf-ae8b-46ae-ba85-e0c55a190ae3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.332919 4725 generic.go:334] "Generic (PLEG): container finished" podID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerID="8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6" exitCode=0 Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.333032 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.333459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerDied","Data":"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6"} Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.333584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c380fcbf-ae8b-46ae-ba85-e0c55a190ae3","Type":"ContainerDied","Data":"81fc9280d5540eed2a0111224784a16ef23e8d223b19905bb4d26444f27606b9"} Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.333655 4725 scope.go:117] "RemoveContainer" containerID="8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.337818 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91849e10-6e8e-466f-a603-1c15622941c6","Type":"ContainerStarted","Data":"9add21db483d9cbc496054a8b8b75ef6758b7cbbe579c502f3fae4efc6eac699"} Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.337864 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91849e10-6e8e-466f-a603-1c15622941c6","Type":"ContainerStarted","Data":"b309663415f6c42f5a99a201e12985aa0126e573b5aeac40025de83587898257"} Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.356959 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.356941332 podStartE2EDuration="2.356941332s" podCreationTimestamp="2025-10-02 11:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:44.356439999 +0000 UTC m=+1304.263939462" watchObservedRunningTime="2025-10-02 11:49:44.356941332 +0000 UTC m=+1304.264440795" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.394161 4725 scope.go:117] "RemoveContainer" containerID="bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.399551 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.410578 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.414018 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.423983 4725 scope.go:117] "RemoveContainer" containerID="8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6" Oct 02 11:49:44 crc kubenswrapper[4725]: E1002 11:49:44.424471 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6\": container with ID starting with 8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6 not found: ID does not exist" containerID="8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.424513 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6"} err="failed to get container status \"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6\": rpc error: code = NotFound desc = could not find container \"8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6\": container with ID starting with 8e8102d66369d7d2242424efef85ea9647d3199ef3b2b36280692c859cb0edc6 not found: ID does not exist" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.424541 4725 scope.go:117] "RemoveContainer" containerID="bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f" Oct 02 11:49:44 crc kubenswrapper[4725]: E1002 11:49:44.425169 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f\": container with ID starting with bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f not found: ID does not exist" containerID="bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.425211 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f"} err="failed to get container status \"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f\": rpc error: code = NotFound desc = could not find container \"bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f\": container with ID starting with bf9f731bd51076cb40cd46a37afd3df696af2d53656d022907f9cdb7e3ee134f not found: ID does not exist" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.425781 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:44 crc kubenswrapper[4725]: E1002 11:49:44.426199 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.426214 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" Oct 02 11:49:44 crc kubenswrapper[4725]: E1002 11:49:44.426227 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.426234 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.426406 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-log" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.426430 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" containerName="nova-metadata-metadata" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.427524 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.431309 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.431451 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.443182 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.512310 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.512475 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvsx\" (UniqueName: \"kubernetes.io/projected/be5ed584-4418-4447-8ed6-2e89c70e903b-kube-api-access-vsvsx\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.512682 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5ed584-4418-4447-8ed6-2e89c70e903b-logs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.512889 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-config-data\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.512999 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615003 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615100 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvsx\" (UniqueName: \"kubernetes.io/projected/be5ed584-4418-4447-8ed6-2e89c70e903b-kube-api-access-vsvsx\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615165 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5ed584-4418-4447-8ed6-2e89c70e903b-logs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " 
pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-config-data\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.615659 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5ed584-4418-4447-8ed6-2e89c70e903b-logs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.619453 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.620524 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.620529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ed584-4418-4447-8ed6-2e89c70e903b-config-data\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.632104 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsvsx\" (UniqueName: \"kubernetes.io/projected/be5ed584-4418-4447-8ed6-2e89c70e903b-kube-api-access-vsvsx\") pod \"nova-metadata-0\" (UID: \"be5ed584-4418-4447-8ed6-2e89c70e903b\") " pod="openstack/nova-metadata-0" Oct 02 11:49:44 crc kubenswrapper[4725]: I1002 11:49:44.752415 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 11:49:45 crc kubenswrapper[4725]: I1002 11:49:45.227095 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 11:49:45 crc kubenswrapper[4725]: I1002 11:49:45.291063 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c380fcbf-ae8b-46ae-ba85-e0c55a190ae3" path="/var/lib/kubelet/pods/c380fcbf-ae8b-46ae-ba85-e0c55a190ae3/volumes" Oct 02 11:49:45 crc kubenswrapper[4725]: I1002 11:49:45.350073 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5ed584-4418-4447-8ed6-2e89c70e903b","Type":"ContainerStarted","Data":"8d891814a4a12b915ea293d7192ab8e20eb44c1d383985cd766710cdbdfcf1d3"} Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.362968 4725 generic.go:334] "Generic (PLEG): container finished" podID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerID="2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" exitCode=0 Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.363405 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"130b74c4-570d-4e65-a6cb-3b295d5caeae","Type":"ContainerDied","Data":"2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8"} Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.369910 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5ed584-4418-4447-8ed6-2e89c70e903b","Type":"ContainerStarted","Data":"ab5ef9e1167dfbc1429b15585d13583fccde55427f462a74c908ad5f82ba3a6c"} Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.369972 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5ed584-4418-4447-8ed6-2e89c70e903b","Type":"ContainerStarted","Data":"35aec43c07d4eeffc03073c8a9470d3a7945eca65d48a5b4d60b0f680f6626a7"} Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.397643 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.397626596 podStartE2EDuration="2.397626596s" podCreationTimestamp="2025-10-02 11:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:46.39476225 +0000 UTC m=+1306.302261723" watchObservedRunningTime="2025-10-02 11:49:46.397626596 +0000 UTC m=+1306.305126059" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.728078 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.867824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data\") pod \"130b74c4-570d-4e65-a6cb-3b295d5caeae\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.868249 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle\") pod \"130b74c4-570d-4e65-a6cb-3b295d5caeae\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.868322 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv78j\" (UniqueName: \"kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j\") pod \"130b74c4-570d-4e65-a6cb-3b295d5caeae\" (UID: \"130b74c4-570d-4e65-a6cb-3b295d5caeae\") " Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.882080 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j" (OuterVolumeSpecName: "kube-api-access-xv78j") pod "130b74c4-570d-4e65-a6cb-3b295d5caeae" (UID: "130b74c4-570d-4e65-a6cb-3b295d5caeae"). InnerVolumeSpecName "kube-api-access-xv78j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.903555 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data" (OuterVolumeSpecName: "config-data") pod "130b74c4-570d-4e65-a6cb-3b295d5caeae" (UID: "130b74c4-570d-4e65-a6cb-3b295d5caeae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.904197 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "130b74c4-570d-4e65-a6cb-3b295d5caeae" (UID: "130b74c4-570d-4e65-a6cb-3b295d5caeae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.970342 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv78j\" (UniqueName: \"kubernetes.io/projected/130b74c4-570d-4e65-a6cb-3b295d5caeae-kube-api-access-xv78j\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.970388 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:46 crc kubenswrapper[4725]: I1002 11:49:46.970400 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b74c4-570d-4e65-a6cb-3b295d5caeae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.380807 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.380790 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"130b74c4-570d-4e65-a6cb-3b295d5caeae","Type":"ContainerDied","Data":"bdf37a3d72906a1cf237d05a8fe11f520f5cb162ee95c6156991b80a93f4d0ca"} Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.381545 4725 scope.go:117] "RemoveContainer" containerID="2f676c00bffbc8e8ed8bb4bbace441ee23bc25af0f57fa25630300a00d945ce8" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.404235 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.419152 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.431836 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:47 crc kubenswrapper[4725]: E1002 11:49:47.432249 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerName="nova-scheduler-scheduler" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.432266 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerName="nova-scheduler-scheduler" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.432451 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" containerName="nova-scheduler-scheduler" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.433058 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.435629 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.445204 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.581350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctvp\" (UniqueName: \"kubernetes.io/projected/19a3659e-e721-4f41-932d-978e69b77755-kube-api-access-zctvp\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.581521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-config-data\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.581545 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.682840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-config-data\") pod \"nova-scheduler-0\" (UID: 
\"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.682881 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.682946 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctvp\" (UniqueName: \"kubernetes.io/projected/19a3659e-e721-4f41-932d-978e69b77755-kube-api-access-zctvp\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.688955 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.689017 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a3659e-e721-4f41-932d-978e69b77755-config-data\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.708495 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctvp\" (UniqueName: \"kubernetes.io/projected/19a3659e-e721-4f41-932d-978e69b77755-kube-api-access-zctvp\") pod \"nova-scheduler-0\" (UID: \"19a3659e-e721-4f41-932d-978e69b77755\") " pod="openstack/nova-scheduler-0" Oct 02 11:49:47 crc kubenswrapper[4725]: I1002 11:49:47.750072 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 11:49:48 crc kubenswrapper[4725]: I1002 11:49:48.199266 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 11:49:48 crc kubenswrapper[4725]: W1002 11:49:48.203443 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a3659e_e721_4f41_932d_978e69b77755.slice/crio-ea25ba3f5a0a5355ad5ec3eee3b26b1c77608d45e3517dc490c088c582739bf3 WatchSource:0}: Error finding container ea25ba3f5a0a5355ad5ec3eee3b26b1c77608d45e3517dc490c088c582739bf3: Status 404 returned error can't find the container with id ea25ba3f5a0a5355ad5ec3eee3b26b1c77608d45e3517dc490c088c582739bf3 Oct 02 11:49:48 crc kubenswrapper[4725]: I1002 11:49:48.392341 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"19a3659e-e721-4f41-932d-978e69b77755","Type":"ContainerStarted","Data":"ea25ba3f5a0a5355ad5ec3eee3b26b1c77608d45e3517dc490c088c582739bf3"} Oct 02 11:49:49 crc kubenswrapper[4725]: I1002 11:49:49.284739 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130b74c4-570d-4e65-a6cb-3b295d5caeae" path="/var/lib/kubelet/pods/130b74c4-570d-4e65-a6cb-3b295d5caeae/volumes" Oct 02 11:49:49 crc kubenswrapper[4725]: I1002 11:49:49.402834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"19a3659e-e721-4f41-932d-978e69b77755","Type":"ContainerStarted","Data":"863b19f5257dd5b07c3d8329082abbe47459af76b62c902467b2524470094b2f"} Oct 02 11:49:49 crc kubenswrapper[4725]: I1002 11:49:49.423658 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.423622292 podStartE2EDuration="2.423622292s" podCreationTimestamp="2025-10-02 11:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:49:49.42053812 +0000 UTC m=+1309.328037583" watchObservedRunningTime="2025-10-02 11:49:49.423622292 +0000 UTC m=+1309.331121755" Oct 02 11:49:49 crc kubenswrapper[4725]: I1002 11:49:49.753150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 11:49:49 crc kubenswrapper[4725]: I1002 11:49:49.753245 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 11:49:52 crc kubenswrapper[4725]: I1002 11:49:52.699389 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:49:52 crc kubenswrapper[4725]: I1002 11:49:52.699670 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 11:49:52 crc kubenswrapper[4725]: I1002 11:49:52.750777 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 11:49:53 crc kubenswrapper[4725]: I1002 11:49:53.714000 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91849e10-6e8e-466f-a603-1c15622941c6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:53 crc kubenswrapper[4725]: I1002 11:49:53.714078 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91849e10-6e8e-466f-a603-1c15622941c6" 
containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:54 crc kubenswrapper[4725]: I1002 11:49:54.753258 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 11:49:54 crc kubenswrapper[4725]: I1002 11:49:54.753332 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 11:49:55 crc kubenswrapper[4725]: I1002 11:49:55.765884 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="be5ed584-4418-4447-8ed6-2e89c70e903b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:55 crc kubenswrapper[4725]: I1002 11:49:55.765866 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="be5ed584-4418-4447-8ed6-2e89c70e903b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 11:49:57 crc kubenswrapper[4725]: I1002 11:49:57.750748 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 11:49:57 crc kubenswrapper[4725]: I1002 11:49:57.793916 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 11:49:58 crc kubenswrapper[4725]: I1002 11:49:58.511510 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.707707 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.708291 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.708668 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.708759 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.717144 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 11:50:02 crc kubenswrapper[4725]: I1002 11:50:02.718390 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 11:50:04 crc kubenswrapper[4725]: I1002 11:50:04.762710 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 11:50:04 crc kubenswrapper[4725]: I1002 11:50:04.763212 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 11:50:04 crc kubenswrapper[4725]: I1002 11:50:04.769300 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 11:50:04 crc kubenswrapper[4725]: I1002 11:50:04.771941 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 11:50:05 crc kubenswrapper[4725]: I1002 11:50:05.583667 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Oct 02 11:50:15 crc kubenswrapper[4725]: I1002 11:50:15.618079 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:16 crc kubenswrapper[4725]: I1002 11:50:16.337039 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:19 crc kubenswrapper[4725]: I1002 11:50:19.731404 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerName="rabbitmq" containerID="cri-o://f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f" gracePeriod=604796 Oct 02 11:50:20 crc kubenswrapper[4725]: I1002 11:50:20.180563 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="rabbitmq" containerID="cri-o://ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2" gracePeriod=604797 Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.331739 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462491 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462533 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462568 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462595 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5kz\" (UniqueName: 
\"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462807 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.462893 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info\") pod \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\" (UID: \"f6e13124-0d0a-48a8-a1a7-0127e60454e1\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.463241 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.463493 4725 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.464467 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.472606 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.475375 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). 
InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.476409 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.476486 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz" (OuterVolumeSpecName: "kube-api-access-4l5kz") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "kube-api-access-4l5kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.478747 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info" (OuterVolumeSpecName: "pod-info") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.478944 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.520600 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data" (OuterVolumeSpecName: "config-data") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.537577 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf" (OuterVolumeSpecName: "server-conf") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565246 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565289 4725 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6e13124-0d0a-48a8-a1a7-0127e60454e1-pod-info\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565328 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565342 4725 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6e13124-0d0a-48a8-a1a7-0127e60454e1-server-conf\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565354 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565366 4725 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6e13124-0d0a-48a8-a1a7-0127e60454e1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565378 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l5kz\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-kube-api-access-4l5kz\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565389 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.565400 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.597591 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.624167 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f6e13124-0d0a-48a8-a1a7-0127e60454e1" (UID: "f6e13124-0d0a-48a8-a1a7-0127e60454e1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.667038 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6e13124-0d0a-48a8-a1a7-0127e60454e1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.667071 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.730782 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.762294 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerID="f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f" exitCode=0 Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.762375 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerDied","Data":"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f"} Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.762399 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.762420 4725 scope.go:117] "RemoveContainer" containerID="f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.762407 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f6e13124-0d0a-48a8-a1a7-0127e60454e1","Type":"ContainerDied","Data":"7c70897cd583ed34a231b1d7f9f502761547dc042199e74479edfb67b6a3d68e"} Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.770277 4725 generic.go:334] "Generic (PLEG): container finished" podID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerID="ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2" exitCode=0 Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.770338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerDied","Data":"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2"} Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.770369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dd00f5ac-14f8-47ba-8310-d6d279ffad6a","Type":"ContainerDied","Data":"01b76edcfb5f78b25d06b3d9eb1967828b129b4b50b6198a094b8a3141f5d1b7"} Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.770443 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.800113 4725 scope.go:117] "RemoveContainer" containerID="bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.816930 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.851021 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.854660 4725 scope.go:117] "RemoveContainer" containerID="f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.855148 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f\": container with ID starting with f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f not found: ID does not exist" containerID="f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.855196 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f"} err="failed to get container status \"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f\": rpc error: code = NotFound desc = could not find container \"f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f\": container with ID starting with f0127cdaadbb6ca61a09db9715a79e891b5851a7c4ea18a2dacd7e44d8ddc81f not found: ID does not exist" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.855229 4725 scope.go:117] "RemoveContainer" containerID="bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.855550 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d\": container with ID starting with bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d not found: ID does not exist" containerID="bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.855591 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d"} err="failed to get container status \"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d\": rpc error: code = NotFound desc = could not find container \"bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d\": container with ID starting with bacd66a4de891f821fc9e62074ad3c6ff123713752fe33a2ef45dc59c1ee727d not found: ID does not exist" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.855613 4725 scope.go:117] "RemoveContainer" containerID="ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.859641 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.860208 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" 
containerName="setup-container" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860236 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerName="setup-container" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.860253 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="setup-container" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860260 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="setup-container" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.860270 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860276 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.860305 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860310 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860506 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.860523 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" containerName="rabbitmq" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.861695 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.864052 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.864322 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.864516 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.865038 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.865343 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.865492 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.866564 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l4nhp" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p7nl\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869642 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869828 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869922 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.869962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.870003 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.870066 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.870125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.870185 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.870223 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins\") pod \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\" (UID: \"dd00f5ac-14f8-47ba-8310-d6d279ffad6a\") " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.872190 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.872808 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.873337 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.874820 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.878125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl" (OuterVolumeSpecName: "kube-api-access-2p7nl") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "kube-api-access-2p7nl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.879839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.880244 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.880363 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.888596 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.909301 4725 scope.go:117] "RemoveContainer" containerID="f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.914890 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data" (OuterVolumeSpecName: "config-data") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.939299 4725 scope.go:117] "RemoveContainer" containerID="ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.939959 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2\": container with ID starting with ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2 not found: ID does not exist" containerID="ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.940008 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2"} err="failed to get container status \"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2\": rpc error: code = NotFound desc = could not find container \"ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2\": container with ID starting with ef7e551a49edaa49920d2280f97c70949060d3a8e4dc55665ae65bb89727dad2 not found: ID does not exist" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.940036 4725 scope.go:117] "RemoveContainer" containerID="f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505" Oct 02 11:50:26 crc kubenswrapper[4725]: E1002 11:50:26.940498 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505\": container with ID starting with f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505 not found: ID does not exist" containerID="f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.940545 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505"} err="failed to get container status \"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505\": rpc error: code = NotFound desc = could not find container \"f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505\": container with ID starting with f44f7cb469130796c8a0832bff5abd432d24ab4cbd4a6089dfa2c5d56ebcc505 not found: ID does not exist" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.967073 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.972310 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.972502 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.972638 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcqv\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-kube-api-access-6jcqv\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.972799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.972908 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.973013 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.973109 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.973197 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.973286 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: 
I1002 11:50:26.973400 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974096 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974257 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p7nl\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-kube-api-access-2p7nl\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974340 4725 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974474 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974551 4725 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974633 4725 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-server-conf\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974705 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.974838 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.975002 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.975090 4725 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-pod-info\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.975173 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:26 crc kubenswrapper[4725]: I1002 11:50:26.991595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dd00f5ac-14f8-47ba-8310-d6d279ffad6a" (UID: "dd00f5ac-14f8-47ba-8310-d6d279ffad6a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.009673 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.076979 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcqv\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-kube-api-access-6jcqv\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077094 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077171 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077191 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077266 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077311 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077361 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077431 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd00f5ac-14f8-47ba-8310-d6d279ffad6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077446 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.077960 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.078306 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.079543 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.080012 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.080137 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.080254 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.081046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.082303 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.085312 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.090332 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.093419 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcqv\" (UniqueName: \"kubernetes.io/projected/a1b44c9c-40f8-4c5e-8616-76e24df2ee97-kube-api-access-6jcqv\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.117604 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1b44c9c-40f8-4c5e-8616-76e24df2ee97\") " pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.183900 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.192388 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.194208 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.203431 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.206184 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.209176 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ptfjh" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.210059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.210648 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.211247 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.211593 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.211844 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.217510 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.221861 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.287511 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd00f5ac-14f8-47ba-8310-d6d279ffad6a" path="/var/lib/kubelet/pods/dd00f5ac-14f8-47ba-8310-d6d279ffad6a/volumes" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.288521 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e13124-0d0a-48a8-a1a7-0127e60454e1" path="/var/lib/kubelet/pods/f6e13124-0d0a-48a8-a1a7-0127e60454e1/volumes" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383095 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc55a26c-8109-4994-812d-1dd87f46d791-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383197 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6bw\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-kube-api-access-nh6bw\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383242 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc 
kubenswrapper[4725]: I1002 11:50:27.383270 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383311 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc55a26c-8109-4994-812d-1dd87f46d791-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383368 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.383401 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.484998 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6bw\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-kube-api-access-nh6bw\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: 
I1002 11:50:27.485071 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485164 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485201 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc55a26c-8109-4994-812d-1dd87f46d791-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485261 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485289 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.485333 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc55a26c-8109-4994-812d-1dd87f46d791-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.486310 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.486585 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.487233 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.487376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.487469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.487513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc55a26c-8109-4994-812d-1dd87f46d791-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.490634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc55a26c-8109-4994-812d-1dd87f46d791-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.493493 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc55a26c-8109-4994-812d-1dd87f46d791-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.493569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.496145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" 
Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.523376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6bw\" (UniqueName: \"kubernetes.io/projected/bc55a26c-8109-4994-812d-1dd87f46d791-kube-api-access-nh6bw\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.535133 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc55a26c-8109-4994-812d-1dd87f46d791\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.574755 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.686779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 11:50:27 crc kubenswrapper[4725]: I1002 11:50:27.783021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1b44c9c-40f8-4c5e-8616-76e24df2ee97","Type":"ContainerStarted","Data":"acc4dfda6698d28dd17a21eb2c1e25202fabc48bc87f8cdb3bcd4fd3c3c81ba5"} Oct 02 11:50:28 crc kubenswrapper[4725]: I1002 11:50:28.018366 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 11:50:28 crc kubenswrapper[4725]: I1002 11:50:28.793925 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc55a26c-8109-4994-812d-1dd87f46d791","Type":"ContainerStarted","Data":"bb08531e6932433e8a9790c8edf40d59dbf01fd6b92b8ed895b6c4b8db8ce54d"} Oct 02 11:50:29 crc kubenswrapper[4725]: I1002 11:50:29.804459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1b44c9c-40f8-4c5e-8616-76e24df2ee97","Type":"ContainerStarted","Data":"be3028194e9b3842da4c5e5b42b76acfa730f98d251abbf0f494031c9bc2a756"} Oct 02 11:50:30 crc kubenswrapper[4725]: I1002 11:50:30.818566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc55a26c-8109-4994-812d-1dd87f46d791","Type":"ContainerStarted","Data":"0cc4fb6b0924aa373db7bbf701cc9088cf32f1129535f2320a7ebe90ee2b79e1"} Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.071419 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.072983 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.076102 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.092028 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197019 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197104 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197431 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197479 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxf6\" (UniqueName: \"kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.197595 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.299640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" 
(UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.301166 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.301085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.301248 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.301331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.302122 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.302267 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.302349 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxf6\" (UniqueName: \"kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.302376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.302513 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " 
pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.303419 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.305609 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.305998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.325698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxf6\" (UniqueName: \"kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6\") pod \"dnsmasq-dns-67b789f86c-z7bsq\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:31 crc kubenswrapper[4725]: I1002 11:50:31.406841 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:32 crc kubenswrapper[4725]: I1002 11:50:32.750047 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:32 crc kubenswrapper[4725]: W1002 11:50:32.756061 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf52b939b_8e72_44df_9079_4152d0b8fb0a.slice/crio-5fac5a098b3f2ca7b10d2587e8969a38ea49b2dc3915e625ee35e5f377990eb2 WatchSource:0}: Error finding container 5fac5a098b3f2ca7b10d2587e8969a38ea49b2dc3915e625ee35e5f377990eb2: Status 404 returned error can't find the container with id 5fac5a098b3f2ca7b10d2587e8969a38ea49b2dc3915e625ee35e5f377990eb2 Oct 02 11:50:32 crc kubenswrapper[4725]: I1002 11:50:32.831945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" event={"ID":"f52b939b-8e72-44df-9079-4152d0b8fb0a","Type":"ContainerStarted","Data":"5fac5a098b3f2ca7b10d2587e8969a38ea49b2dc3915e625ee35e5f377990eb2"} Oct 02 11:50:33 crc kubenswrapper[4725]: I1002 11:50:33.844388 4725 generic.go:334] "Generic (PLEG): container finished" podID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerID="9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755" exitCode=0 Oct 02 11:50:33 crc kubenswrapper[4725]: I1002 11:50:33.844494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" event={"ID":"f52b939b-8e72-44df-9079-4152d0b8fb0a","Type":"ContainerDied","Data":"9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755"} Oct 02 11:50:34 crc kubenswrapper[4725]: I1002 11:50:34.857403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" 
event={"ID":"f52b939b-8e72-44df-9079-4152d0b8fb0a","Type":"ContainerStarted","Data":"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d"} Oct 02 11:50:34 crc kubenswrapper[4725]: I1002 11:50:34.858519 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:34 crc kubenswrapper[4725]: I1002 11:50:34.884223 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" podStartSLOduration=3.884200753 podStartE2EDuration="3.884200753s" podCreationTimestamp="2025-10-02 11:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:50:34.877409492 +0000 UTC m=+1354.784908975" watchObservedRunningTime="2025-10-02 11:50:34.884200753 +0000 UTC m=+1354.791700216" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.409670 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.509190 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.509429 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="dnsmasq-dns" containerID="cri-o://7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989" gracePeriod=10 Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.683311 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-xrt5j"] Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.694891 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-xrt5j"] Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.695001 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: E1002 11:50:41.789284 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cb4442_05de_43fa_b0ff_bb12e23ee408.slice/crio-7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cb4442_05de_43fa_b0ff_bb12e23ee408.slice/crio-conmon-7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.836710 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwsg\" (UniqueName: \"kubernetes.io/projected/d151b71f-87ba-40a0-8858-99f129ac1e55-kube-api-access-5zwsg\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837090 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837109 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.837240 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-config\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " 
pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.934131 4725 generic.go:334] "Generic (PLEG): container finished" podID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerID="7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989" exitCode=0 Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.934165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerDied","Data":"7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989"} Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939028 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939064 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939080 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939097 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939156 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-config\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.939220 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwsg\" (UniqueName: \"kubernetes.io/projected/d151b71f-87ba-40a0-8858-99f129ac1e55-kube-api-access-5zwsg\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.941111 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.941668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.941746 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.942204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.942533 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.942701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d151b71f-87ba-40a0-8858-99f129ac1e55-config\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:41 crc kubenswrapper[4725]: I1002 11:50:41.963136 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwsg\" (UniqueName: \"kubernetes.io/projected/d151b71f-87ba-40a0-8858-99f129ac1e55-kube-api-access-5zwsg\") pod \"dnsmasq-dns-cb6ffcf87-xrt5j\" (UID: \"d151b71f-87ba-40a0-8858-99f129ac1e55\") " pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.027001 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.152309 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.243702 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.244245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.244323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.244422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7592\" (UniqueName: \"kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.244548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.245397 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config\") pod \"88cb4442-05de-43fa-b0ff-bb12e23ee408\" (UID: \"88cb4442-05de-43fa-b0ff-bb12e23ee408\") " Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.248621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592" (OuterVolumeSpecName: "kube-api-access-l7592") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "kube-api-access-l7592". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.302644 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.303536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config" (OuterVolumeSpecName: "config") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.305889 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.306649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.319640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88cb4442-05de-43fa-b0ff-bb12e23ee408" (UID: "88cb4442-05de-43fa-b0ff-bb12e23ee408"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348065 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348103 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348112 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348123 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348134 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88cb4442-05de-43fa-b0ff-bb12e23ee408-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.348142 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7592\" (UniqueName: \"kubernetes.io/projected/88cb4442-05de-43fa-b0ff-bb12e23ee408-kube-api-access-l7592\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.539850 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-xrt5j"] Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.944771 4725 generic.go:334] "Generic (PLEG): container finished" podID="d151b71f-87ba-40a0-8858-99f129ac1e55" containerID="1e8ffd6b7da0d524e17149e64758bf3d0e27a46951e945756f1a38b2b40d52e1" exitCode=0 Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.944821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" event={"ID":"d151b71f-87ba-40a0-8858-99f129ac1e55","Type":"ContainerDied","Data":"1e8ffd6b7da0d524e17149e64758bf3d0e27a46951e945756f1a38b2b40d52e1"} Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.945271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" event={"ID":"d151b71f-87ba-40a0-8858-99f129ac1e55","Type":"ContainerStarted","Data":"58c8c68817c9c9e3a1162181b1c1b86f175371450095d2418609d7031b53025a"} Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.947368 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" event={"ID":"88cb4442-05de-43fa-b0ff-bb12e23ee408","Type":"ContainerDied","Data":"a8bae5cb681d5acd86517fa13efe3790c5e0ff110db1ea283270e34b44bcb70e"} Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.947408 4725 scope.go:117] "RemoveContainer" containerID="7a5f78933a4b003f821b67f6b332eb4ed60867a17c06faf942bb158b05938989" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.947451 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-2qjjr" Oct 02 11:50:42 crc kubenswrapper[4725]: I1002 11:50:42.997432 4725 scope.go:117] "RemoveContainer" containerID="71bae2c29dbf3cebc0c12d77473b674e76a263faa77e11f0a02d771d4290ab68" Oct 02 11:50:43 crc kubenswrapper[4725]: I1002 11:50:43.142756 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:50:43 crc kubenswrapper[4725]: I1002 11:50:43.151100 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-2qjjr"] Oct 02 11:50:43 crc kubenswrapper[4725]: I1002 11:50:43.281535 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" path="/var/lib/kubelet/pods/88cb4442-05de-43fa-b0ff-bb12e23ee408/volumes" Oct 02 11:50:43 crc kubenswrapper[4725]: I1002 11:50:43.963166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" event={"ID":"d151b71f-87ba-40a0-8858-99f129ac1e55","Type":"ContainerStarted","Data":"8db6a6dc8f330b63e687b9c5f794ab311f2fe48228b2ed771d3aa9202e4ece44"} Oct 02 11:50:43 crc kubenswrapper[4725]: I1002 11:50:43.963847 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.028765 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.070065 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-xrt5j" podStartSLOduration=11.070044202 podStartE2EDuration="11.070044202s" podCreationTimestamp="2025-10-02 11:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:50:43.982203286 +0000 UTC m=+1363.889702769" watchObservedRunningTime="2025-10-02 11:50:52.070044202 +0000 UTC m=+1371.977543655" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.163663 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.163955 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" 
podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="dnsmasq-dns" containerID="cri-o://1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d" gracePeriod=10 Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.684637 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.738710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.738864 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.738916 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.738949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxf6\" (UniqueName: \"kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.739085 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.739133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.739185 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb\") pod \"f52b939b-8e72-44df-9079-4152d0b8fb0a\" (UID: \"f52b939b-8e72-44df-9079-4152d0b8fb0a\") " Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.752596 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6" (OuterVolumeSpecName: "kube-api-access-djxf6") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "kube-api-access-djxf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.790645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.799113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.800873 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.806594 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config" (OuterVolumeSpecName: "config") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.809279 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.810349 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f52b939b-8e72-44df-9079-4152d0b8fb0a" (UID: "f52b939b-8e72-44df-9079-4152d0b8fb0a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842202 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842233 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842242 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842252 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842261 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-config\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842271 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f52b939b-8e72-44df-9079-4152d0b8fb0a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:52 crc kubenswrapper[4725]: I1002 11:50:52.842279 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxf6\" (UniqueName: \"kubernetes.io/projected/f52b939b-8e72-44df-9079-4152d0b8fb0a-kube-api-access-djxf6\") on node \"crc\" DevicePath \"\"" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.050118 4725 generic.go:334] "Generic (PLEG): container finished" podID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerID="1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d" exitCode=0 Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.050167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" event={"ID":"f52b939b-8e72-44df-9079-4152d0b8fb0a","Type":"ContainerDied","Data":"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d"} Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.050227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" event={"ID":"f52b939b-8e72-44df-9079-4152d0b8fb0a","Type":"ContainerDied","Data":"5fac5a098b3f2ca7b10d2587e8969a38ea49b2dc3915e625ee35e5f377990eb2"} Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.050258 4725 scope.go:117] "RemoveContainer" containerID="1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.050242 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-z7bsq" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.075525 4725 scope.go:117] "RemoveContainer" containerID="9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.092166 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.101607 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-z7bsq"] Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.123588 4725 scope.go:117] "RemoveContainer" containerID="1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d" Oct 02 11:50:53 crc kubenswrapper[4725]: E1002 11:50:53.124099 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d\": container with ID starting with 1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d not found: ID does not exist" containerID="1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.124148 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d"} err="failed to get container status \"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d\": rpc error: code = NotFound desc = could not find container \"1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d\": container with ID starting with 1149f829db54c4362fd72bcf1864049fcb5f369105679312bf31df02b0b5fe9d not found: ID does not exist" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.124177 4725 scope.go:117] "RemoveContainer" containerID="9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755" Oct 02 11:50:53 crc kubenswrapper[4725]: E1002 11:50:53.124563 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755\": container with ID starting with 9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755 not found: ID does not exist" containerID="9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.124602 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755"} err="failed to get container status \"9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755\": rpc error: code = NotFound desc = could not find container \"9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755\": container with ID starting with 9cc068450f0688c427cc3ae797b5cb85e23dd4bdb8a242f071835f84cdcd1755 not found: ID does not exist" Oct 02 11:50:53 crc kubenswrapper[4725]: I1002 11:50:53.280050 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" path="/var/lib/kubelet/pods/f52b939b-8e72-44df-9079-4152d0b8fb0a/volumes" Oct 02 11:51:02 crc kubenswrapper[4725]: I1002 11:51:02.151962 4725 generic.go:334] "Generic (PLEG): container finished" podID="a1b44c9c-40f8-4c5e-8616-76e24df2ee97" containerID="be3028194e9b3842da4c5e5b42b76acfa730f98d251abbf0f494031c9bc2a756" 
exitCode=0 Oct 02 11:51:02 crc kubenswrapper[4725]: I1002 11:51:02.152062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1b44c9c-40f8-4c5e-8616-76e24df2ee97","Type":"ContainerDied","Data":"be3028194e9b3842da4c5e5b42b76acfa730f98d251abbf0f494031c9bc2a756"} Oct 02 11:51:02 crc kubenswrapper[4725]: E1002 11:51:02.440317 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc55a26c_8109_4994_812d_1dd87f46d791.slice/crio-conmon-0cc4fb6b0924aa373db7bbf701cc9088cf32f1129535f2320a7ebe90ee2b79e1.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:51:03 crc kubenswrapper[4725]: I1002 11:51:03.161335 4725 generic.go:334] "Generic (PLEG): container finished" podID="bc55a26c-8109-4994-812d-1dd87f46d791" containerID="0cc4fb6b0924aa373db7bbf701cc9088cf32f1129535f2320a7ebe90ee2b79e1" exitCode=0 Oct 02 11:51:03 crc kubenswrapper[4725]: I1002 11:51:03.161434 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc55a26c-8109-4994-812d-1dd87f46d791","Type":"ContainerDied","Data":"0cc4fb6b0924aa373db7bbf701cc9088cf32f1129535f2320a7ebe90ee2b79e1"} Oct 02 11:51:03 crc kubenswrapper[4725]: I1002 11:51:03.164207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1b44c9c-40f8-4c5e-8616-76e24df2ee97","Type":"ContainerStarted","Data":"5f70a0621a54574f19d6a5f82a8303d4b34a1a779e38c22fbeca23e2d08de3c6"} Oct 02 11:51:03 crc kubenswrapper[4725]: I1002 11:51:03.164431 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 02 11:51:03 crc kubenswrapper[4725]: I1002 11:51:03.219081 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.219065113 podStartE2EDuration="37.219065113s" podCreationTimestamp="2025-10-02 11:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:51:03.209673834 +0000 UTC m=+1383.117173317" watchObservedRunningTime="2025-10-02 11:51:03.219065113 +0000 UTC m=+1383.126564566" Oct 02 11:51:04 crc kubenswrapper[4725]: I1002 11:51:04.173790 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc55a26c-8109-4994-812d-1dd87f46d791","Type":"ContainerStarted","Data":"49406da41274f246df2462a40b29e2de83b60704189f7b4a56c74e2d3bca74ca"} Oct 02 11:51:04 crc kubenswrapper[4725]: I1002 11:51:04.174546 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:51:04 crc kubenswrapper[4725]: I1002 11:51:04.204172 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.20415433 podStartE2EDuration="37.20415433s" podCreationTimestamp="2025-10-02 11:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 11:51:04.192978613 +0000 UTC m=+1384.100478096" watchObservedRunningTime="2025-10-02 11:51:04.20415433 +0000 UTC m=+1384.111653803" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.336098 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq"] Oct 02 11:51:05 crc kubenswrapper[4725]: E1002 11:51:05.337050 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337077 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: E1002 11:51:05.337106 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="init" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337119 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="init" Oct 02 11:51:05 crc kubenswrapper[4725]: E1002 11:51:05.337146 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337158 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: E1002 11:51:05.337196 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="init" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337208 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="init" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337521 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cb4442-05de-43fa-b0ff-bb12e23ee408" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.337565 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52b939b-8e72-44df-9079-4152d0b8fb0a" containerName="dnsmasq-dns" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.338606 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.342302 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.342334 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.342505 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.343082 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.347372 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq"] Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.492431 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.492489 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.492527 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.492552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vfn7\" (UniqueName: \"kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.594187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.594233 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.594259 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.594954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vfn7\" (UniqueName: \"kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.600056 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.605280 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.615985 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.620381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vfn7\" (UniqueName: \"kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:05 crc kubenswrapper[4725]: I1002 11:51:05.685634 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:06 crc kubenswrapper[4725]: I1002 11:51:06.240002 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq"] Oct 02 11:51:06 crc kubenswrapper[4725]: W1002 11:51:06.245925 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd070422a_2b6f_42b9_8765_6f630ad4b68f.slice/crio-316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd WatchSource:0}: Error finding container 316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd: Status 404 returned error can't find the container with id 316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd Oct 02 11:51:06 crc kubenswrapper[4725]: I1002 11:51:06.248399 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 11:51:07 crc kubenswrapper[4725]: I1002 11:51:07.208255 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" event={"ID":"d070422a-2b6f-42b9-8765-6f630ad4b68f","Type":"ContainerStarted","Data":"316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd"} Oct 02 11:51:15 crc kubenswrapper[4725]: I1002 11:51:15.288335 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" event={"ID":"d070422a-2b6f-42b9-8765-6f630ad4b68f","Type":"ContainerStarted","Data":"10ec15ec123d053c99497e4042105ff9d52ede2d9be516eee1c3f9de137bcd36"} Oct 02 11:51:15 crc kubenswrapper[4725]: I1002 11:51:15.307349 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" podStartSLOduration=1.6734636219999999 podStartE2EDuration="10.307332192s" podCreationTimestamp="2025-10-02 11:51:05 +0000 UTC" firstStartedPulling="2025-10-02 11:51:06.248142345 +0000 UTC m=+1386.155641808" lastFinishedPulling="2025-10-02 11:51:14.882010915 +0000 UTC m=+1394.789510378" observedRunningTime="2025-10-02 11:51:15.302624716 +0000 UTC m=+1395.210124219" watchObservedRunningTime="2025-10-02 11:51:15.307332192 +0000 UTC m=+1395.214831655" Oct 02 11:51:17 crc kubenswrapper[4725]: I1002 11:51:17.198201 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 02 11:51:17 crc kubenswrapper[4725]: I1002 11:51:17.577879 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 02 11:51:26 crc kubenswrapper[4725]: I1002 11:51:26.434288 4725 generic.go:334] "Generic (PLEG): container finished" podID="d070422a-2b6f-42b9-8765-6f630ad4b68f" containerID="10ec15ec123d053c99497e4042105ff9d52ede2d9be516eee1c3f9de137bcd36" exitCode=0 Oct 02 11:51:26 crc kubenswrapper[4725]: I1002 11:51:26.434356 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" event={"ID":"d070422a-2b6f-42b9-8765-6f630ad4b68f","Type":"ContainerDied","Data":"10ec15ec123d053c99497e4042105ff9d52ede2d9be516eee1c3f9de137bcd36"} Oct 02 11:51:27 crc kubenswrapper[4725]: I1002 11:51:27.943449 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.054148 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key\") pod \"d070422a-2b6f-42b9-8765-6f630ad4b68f\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.054261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle\") pod \"d070422a-2b6f-42b9-8765-6f630ad4b68f\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.054405 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory\") pod \"d070422a-2b6f-42b9-8765-6f630ad4b68f\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.054435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vfn7\" (UniqueName: \"kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7\") pod \"d070422a-2b6f-42b9-8765-6f630ad4b68f\" (UID: \"d070422a-2b6f-42b9-8765-6f630ad4b68f\") " Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.066080 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7" (OuterVolumeSpecName: "kube-api-access-5vfn7") pod "d070422a-2b6f-42b9-8765-6f630ad4b68f" (UID: "d070422a-2b6f-42b9-8765-6f630ad4b68f"). InnerVolumeSpecName "kube-api-access-5vfn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.078553 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d070422a-2b6f-42b9-8765-6f630ad4b68f" (UID: "d070422a-2b6f-42b9-8765-6f630ad4b68f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.091691 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory" (OuterVolumeSpecName: "inventory") pod "d070422a-2b6f-42b9-8765-6f630ad4b68f" (UID: "d070422a-2b6f-42b9-8765-6f630ad4b68f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.092792 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d070422a-2b6f-42b9-8765-6f630ad4b68f" (UID: "d070422a-2b6f-42b9-8765-6f630ad4b68f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.156638 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vfn7\" (UniqueName: \"kubernetes.io/projected/d070422a-2b6f-42b9-8765-6f630ad4b68f-kube-api-access-5vfn7\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.156685 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.156695 4725 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.156707 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d070422a-2b6f-42b9-8765-6f630ad4b68f-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.457390 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" event={"ID":"d070422a-2b6f-42b9-8765-6f630ad4b68f","Type":"ContainerDied","Data":"316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd"} Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.457437 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316c92071052a9ff36fcb7c180d8da45238a161ff3738d6d5fc8e41585088bfd" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.457705 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.564788 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt"] Oct 02 11:51:28 crc kubenswrapper[4725]: E1002 11:51:28.565447 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d070422a-2b6f-42b9-8765-6f630ad4b68f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.565469 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d070422a-2b6f-42b9-8765-6f630ad4b68f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.565684 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d070422a-2b6f-42b9-8765-6f630ad4b68f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.566537 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.569303 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.569456 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.569679 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.574401 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.576550 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt"] Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.675796 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.676041 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.676138 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmpt\" (UniqueName: \"kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.777640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gmpt\" (UniqueName: \"kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.777819 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.777872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.781481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.783219 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.807067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gmpt\" (UniqueName: \"kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qt8kt\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:28 crc kubenswrapper[4725]: I1002 11:51:28.898689 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:29 crc kubenswrapper[4725]: I1002 11:51:29.409330 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt"] Oct 02 11:51:29 crc kubenswrapper[4725]: I1002 11:51:29.468738 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" event={"ID":"c238f16b-d636-421b-bbbf-53870c63c217","Type":"ContainerStarted","Data":"054ee81357144b8f8d43776dc7b28cfc700d30545b4bacfe0b92b5561fc53611"} Oct 02 11:51:30 crc kubenswrapper[4725]: I1002 11:51:30.482244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" event={"ID":"c238f16b-d636-421b-bbbf-53870c63c217","Type":"ContainerStarted","Data":"48035bc9f491b1cb6576d1e8e5e468d20743dba26fb7a9ae5782c5ac4b4de5b1"} Oct 02 11:51:30 crc kubenswrapper[4725]: I1002 11:51:30.505908 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" podStartSLOduration=2.099574028 podStartE2EDuration="2.505873319s" podCreationTimestamp="2025-10-02 11:51:28 +0000 UTC" firstStartedPulling="2025-10-02 11:51:29.432770983 +0000 UTC m=+1409.340270456" lastFinishedPulling="2025-10-02 11:51:29.839070274 +0000 UTC m=+1409.746569747" observedRunningTime="2025-10-02 11:51:30.498298927 +0000 UTC m=+1410.405798400" watchObservedRunningTime="2025-10-02 11:51:30.505873319 +0000 UTC m=+1410.413372822" Oct 02 11:51:33 crc kubenswrapper[4725]: I1002 11:51:33.520402 4725 generic.go:334] "Generic (PLEG): container finished" podID="c238f16b-d636-421b-bbbf-53870c63c217" containerID="48035bc9f491b1cb6576d1e8e5e468d20743dba26fb7a9ae5782c5ac4b4de5b1" exitCode=0 Oct 02 11:51:33 crc kubenswrapper[4725]: I1002 11:51:33.520490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" 
event={"ID":"c238f16b-d636-421b-bbbf-53870c63c217","Type":"ContainerDied","Data":"48035bc9f491b1cb6576d1e8e5e468d20743dba26fb7a9ae5782c5ac4b4de5b1"} Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.028046 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.128850 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gmpt\" (UniqueName: \"kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt\") pod \"c238f16b-d636-421b-bbbf-53870c63c217\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.129057 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory\") pod \"c238f16b-d636-421b-bbbf-53870c63c217\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.129104 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key\") pod \"c238f16b-d636-421b-bbbf-53870c63c217\" (UID: \"c238f16b-d636-421b-bbbf-53870c63c217\") " Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.134002 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt" (OuterVolumeSpecName: "kube-api-access-9gmpt") pod "c238f16b-d636-421b-bbbf-53870c63c217" (UID: "c238f16b-d636-421b-bbbf-53870c63c217"). InnerVolumeSpecName "kube-api-access-9gmpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.156277 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory" (OuterVolumeSpecName: "inventory") pod "c238f16b-d636-421b-bbbf-53870c63c217" (UID: "c238f16b-d636-421b-bbbf-53870c63c217"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.188035 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c238f16b-d636-421b-bbbf-53870c63c217" (UID: "c238f16b-d636-421b-bbbf-53870c63c217"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.231405 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.231440 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c238f16b-d636-421b-bbbf-53870c63c217-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.231449 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gmpt\" (UniqueName: \"kubernetes.io/projected/c238f16b-d636-421b-bbbf-53870c63c217-kube-api-access-9gmpt\") on node \"crc\" DevicePath \"\"" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.542893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" event={"ID":"c238f16b-d636-421b-bbbf-53870c63c217","Type":"ContainerDied","Data":"054ee81357144b8f8d43776dc7b28cfc700d30545b4bacfe0b92b5561fc53611"} Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.543222 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054ee81357144b8f8d43776dc7b28cfc700d30545b4bacfe0b92b5561fc53611" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.542956 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qt8kt" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.610658 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5"] Oct 02 11:51:35 crc kubenswrapper[4725]: E1002 11:51:35.611134 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c238f16b-d636-421b-bbbf-53870c63c217" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.611159 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c238f16b-d636-421b-bbbf-53870c63c217" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.611381 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c238f16b-d636-421b-bbbf-53870c63c217" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.612046 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.617335 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.617381 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.617675 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.619791 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.631505 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5"] Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.741164 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.741421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pms9c\" (UniqueName: \"kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.741556 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.741636 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.843016 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.843137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.843211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pms9c\" (UniqueName: \"kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.843319 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.847180 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.847235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.852392 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.860453 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pms9c\" (UniqueName: \"kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:35 crc kubenswrapper[4725]: I1002 11:51:35.939432 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:51:36 crc kubenswrapper[4725]: I1002 11:51:36.447907 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5"] Oct 02 11:51:36 crc kubenswrapper[4725]: I1002 11:51:36.552519 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" event={"ID":"58fb4d5d-e01c-4ede-91c8-9674a71c34a1","Type":"ContainerStarted","Data":"cb56a113df94788683132b2ee5fb3bdeaaeb9159e0803602cdb491ecb65eb77d"} Oct 02 11:51:37 crc kubenswrapper[4725]: I1002 11:51:37.563563 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" event={"ID":"58fb4d5d-e01c-4ede-91c8-9674a71c34a1","Type":"ContainerStarted","Data":"ea2018ee25f875a1e7b8c55a83ec89241cdd925b154b4f6021020175cca5ba70"} Oct 02 11:51:37 crc kubenswrapper[4725]: I1002 11:51:37.587425 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" podStartSLOduration=1.913765699 podStartE2EDuration="2.587406596s" podCreationTimestamp="2025-10-02 11:51:35 +0000 UTC" firstStartedPulling="2025-10-02 11:51:36.448811698 +0000 UTC m=+1416.356311161" lastFinishedPulling="2025-10-02 11:51:37.122452575 +0000 UTC m=+1417.029952058" observedRunningTime="2025-10-02 11:51:37.583246585 +0000 UTC m=+1417.490746048" watchObservedRunningTime="2025-10-02 11:51:37.587406596 +0000 UTC m=+1417.494906059" Oct 02 11:51:44 crc kubenswrapper[4725]: I1002 11:51:44.978658 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:51:44 crc kubenswrapper[4725]: I1002 11:51:44.980577 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:52:14 crc kubenswrapper[4725]: I1002 11:52:14.978483 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:52:14 crc kubenswrapper[4725]: I1002 11:52:14.979086 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:52:33 crc kubenswrapper[4725]: I1002 11:52:33.250947 4725 scope.go:117] "RemoveContainer" containerID="20318190e9712d50e65e207c5c89bb0086328f99e3893e0a975ada92379c84e1" Oct 02 11:52:33 crc kubenswrapper[4725]: I1002 11:52:33.289374 4725 scope.go:117] "RemoveContainer" containerID="7a497cedd27c199a591fb3a06d42c73d74bf658c1b0dbb5ed50c365c1e124b5f" Oct 02 11:52:33 crc kubenswrapper[4725]: I1002 11:52:33.398923 4725 scope.go:117] 
"RemoveContainer" containerID="57ac9da0dbd1635fa7b9e46db2775869f5f26574fba5c114e5f4b5b49e01678d" Oct 02 11:52:44 crc kubenswrapper[4725]: I1002 11:52:44.978004 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:52:44 crc kubenswrapper[4725]: I1002 11:52:44.978806 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:52:44 crc kubenswrapper[4725]: I1002 11:52:44.978889 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:52:44 crc kubenswrapper[4725]: I1002 11:52:44.980293 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:52:44 crc kubenswrapper[4725]: I1002 11:52:44.981471 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08" gracePeriod=600 Oct 02 11:52:45 crc kubenswrapper[4725]: E1002 11:52:45.176398 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9bad7c_78f8_435d_8449_7c5b04a16869.slice/crio-a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08.scope\": RecentStats: unable to find data in memory cache]" Oct 02 11:52:45 crc kubenswrapper[4725]: I1002 11:52:45.268059 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08" exitCode=0 Oct 02 11:52:45 crc kubenswrapper[4725]: I1002 11:52:45.288470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08"} Oct 02 11:52:45 crc kubenswrapper[4725]: I1002 11:52:45.288556 4725 scope.go:117] "RemoveContainer" containerID="3af7e2353d25f82db2912213e59998b17803f7721dc32c996865e9d5b9f6bacc" Oct 02 11:52:46 crc kubenswrapper[4725]: I1002 11:52:46.278984 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d"} Oct 02 11:53:03 crc kubenswrapper[4725]: I1002 11:53:03.973377 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:03 crc kubenswrapper[4725]: I1002 11:53:03.976112 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:03.985791 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.160845 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.160937 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.160964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2g5\" (UniqueName: \"kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.262479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.262791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.262816 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2g5\" (UniqueName: \"kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.263204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.263221 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " 
pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.289945 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2g5\" (UniqueName: \"kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5\") pod \"redhat-operators-t9qlv\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.329619 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:04 crc kubenswrapper[4725]: I1002 11:53:04.768100 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:05 crc kubenswrapper[4725]: I1002 11:53:05.483932 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerID="6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac" exitCode=0 Oct 02 11:53:05 crc kubenswrapper[4725]: I1002 11:53:05.484002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerDied","Data":"6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac"} Oct 02 11:53:05 crc kubenswrapper[4725]: I1002 11:53:05.484043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerStarted","Data":"4944b139417863e9de564d06263731d5dae4370cd2891afb1f04b9e2ab2dd75a"} Oct 02 11:53:07 crc kubenswrapper[4725]: I1002 11:53:07.511020 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerID="de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe" exitCode=0 Oct 02 11:53:07 crc kubenswrapper[4725]: I1002 11:53:07.511139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerDied","Data":"de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe"} Oct 02 11:53:09 crc kubenswrapper[4725]: I1002 11:53:09.537996 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerStarted","Data":"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590"} Oct 02 11:53:09 crc kubenswrapper[4725]: I1002 11:53:09.559441 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9qlv" podStartSLOduration=4.019496858 podStartE2EDuration="6.559421129s" podCreationTimestamp="2025-10-02 11:53:03 +0000 UTC" firstStartedPulling="2025-10-02 11:53:05.486228835 +0000 UTC m=+1505.393728328" lastFinishedPulling="2025-10-02 11:53:08.026153096 +0000 UTC m=+1507.933652599" observedRunningTime="2025-10-02 11:53:09.558321258 +0000 UTC m=+1509.465820761" watchObservedRunningTime="2025-10-02 11:53:09.559421129 +0000 UTC m=+1509.466920612" Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.762390 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.765430 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.779642 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.933822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.933905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfns2\" (UniqueName: \"kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:11 crc kubenswrapper[4725]: I1002 11:53:11.934136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.035566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.035632 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.035688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfns2\" (UniqueName: \"kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.036171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.036225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.054870 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfns2\" (UniqueName: \"kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2\") pod \"certified-operators-lhzlm\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.087198 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:12 crc kubenswrapper[4725]: I1002 11:53:12.613579 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:13 crc kubenswrapper[4725]: I1002 11:53:13.580521 4725 generic.go:334] "Generic (PLEG): container finished" podID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerID="f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3" exitCode=0 Oct 02 11:53:13 crc kubenswrapper[4725]: I1002 11:53:13.580673 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerDied","Data":"f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3"} Oct 02 11:53:13 crc kubenswrapper[4725]: I1002 11:53:13.580866 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerStarted","Data":"77b05895f4b0985afe7c607c176fabc2bd1afe1857bd729d10c9b01d47b9b7f1"} Oct 02 11:53:14 crc kubenswrapper[4725]: I1002 11:53:14.330423 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:14 crc kubenswrapper[4725]: I1002 11:53:14.330489 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:14 crc kubenswrapper[4725]: I1002 11:53:14.594471 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerStarted","Data":"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33"} Oct 02 11:53:15 crc kubenswrapper[4725]: I1002 11:53:15.386899 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9qlv" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="registry-server" probeResult="failure" output=< Oct 02 11:53:15 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Oct 02 11:53:15 crc kubenswrapper[4725]: > Oct 02 11:53:15 crc kubenswrapper[4725]: I1002 11:53:15.606636 4725 generic.go:334] "Generic (PLEG): container finished" podID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerID="1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33" exitCode=0 Oct 02 11:53:15 crc kubenswrapper[4725]: I1002 11:53:15.606683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerDied","Data":"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33"} Oct 02 11:53:16 crc kubenswrapper[4725]: I1002 11:53:16.618107 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" 
event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerStarted","Data":"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0"} Oct 02 11:53:16 crc kubenswrapper[4725]: I1002 11:53:16.639969 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lhzlm" podStartSLOduration=2.963565597 podStartE2EDuration="5.639951528s" podCreationTimestamp="2025-10-02 11:53:11 +0000 UTC" firstStartedPulling="2025-10-02 11:53:13.582479186 +0000 UTC m=+1513.489978649" lastFinishedPulling="2025-10-02 11:53:16.258865097 +0000 UTC m=+1516.166364580" observedRunningTime="2025-10-02 11:53:16.639608188 +0000 UTC m=+1516.547107681" watchObservedRunningTime="2025-10-02 11:53:16.639951528 +0000 UTC m=+1516.547450991" Oct 02 11:53:22 crc kubenswrapper[4725]: I1002 11:53:22.087503 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:22 crc kubenswrapper[4725]: I1002 11:53:22.088219 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:22 crc kubenswrapper[4725]: I1002 11:53:22.149568 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:22 crc kubenswrapper[4725]: I1002 11:53:22.756740 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:22 crc kubenswrapper[4725]: I1002 11:53:22.816870 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:24 crc kubenswrapper[4725]: I1002 11:53:24.377128 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:24 crc kubenswrapper[4725]: I1002 11:53:24.430109 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:24 crc kubenswrapper[4725]: I1002 11:53:24.722761 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lhzlm" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="registry-server" containerID="cri-o://1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0" gracePeriod=2 Oct 02 11:53:24 crc kubenswrapper[4725]: I1002 11:53:24.789781 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.269363 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.484529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities\") pod \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.484592 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfns2\" (UniqueName: \"kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2\") pod \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.484658 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content\") pod \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\" (UID: \"214fc6c4-aebf-4fb6-af82-116a4b10a7cf\") " Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.485463 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities" (OuterVolumeSpecName: "utilities") pod "214fc6c4-aebf-4fb6-af82-116a4b10a7cf" (UID: "214fc6c4-aebf-4fb6-af82-116a4b10a7cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.492622 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2" (OuterVolumeSpecName: "kube-api-access-jfns2") pod "214fc6c4-aebf-4fb6-af82-116a4b10a7cf" (UID: "214fc6c4-aebf-4fb6-af82-116a4b10a7cf"). InnerVolumeSpecName "kube-api-access-jfns2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.532292 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "214fc6c4-aebf-4fb6-af82-116a4b10a7cf" (UID: "214fc6c4-aebf-4fb6-af82-116a4b10a7cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.587854 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.587922 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.587933 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfns2\" (UniqueName: \"kubernetes.io/projected/214fc6c4-aebf-4fb6-af82-116a4b10a7cf-kube-api-access-jfns2\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.744696 4725 generic.go:334] "Generic (PLEG): container finished" podID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerID="1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0" exitCode=0 Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.744796 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhzlm" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.744781 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerDied","Data":"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0"} Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.744990 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhzlm" event={"ID":"214fc6c4-aebf-4fb6-af82-116a4b10a7cf","Type":"ContainerDied","Data":"77b05895f4b0985afe7c607c176fabc2bd1afe1857bd729d10c9b01d47b9b7f1"} Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.745018 4725 scope.go:117] "RemoveContainer" containerID="1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.745378 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9qlv" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="registry-server" containerID="cri-o://1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590" gracePeriod=2 Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.783637 4725 scope.go:117] "RemoveContainer" containerID="1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.786820 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.793832 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lhzlm"] Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.856190 4725 scope.go:117] "RemoveContainer" containerID="f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.946521 4725 scope.go:117] "RemoveContainer" containerID="1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0" Oct 02 11:53:25 crc kubenswrapper[4725]: E1002 11:53:25.948997 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0\": container with ID starting with 1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0 not found: ID does not exist" containerID="1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.949098 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0"} err="failed to get container status \"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0\": rpc error: code = NotFound desc = could not find container \"1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0\": container with ID starting with 1f0bcb6207e0b234e4a058aa9079abe554d2b9f37cbabd7484763d094ea314e0 not found: ID does not exist" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.949135 4725 scope.go:117] "RemoveContainer" containerID="1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33" Oct 02 11:53:25 crc kubenswrapper[4725]: E1002 11:53:25.949601 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33\": container with ID starting with 1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33 not found: ID does not exist" containerID="1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.949646 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33"} err="failed to get container status \"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33\": rpc error: code = NotFound desc = could not find container \"1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33\": container with ID starting with 1e950489ce9c8b1c6c1705c1302e2a18b6a1141141dd35f4bf63a849dcb0ec33 not found: ID does not exist" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.949683 4725 scope.go:117] "RemoveContainer" containerID="f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3" Oct 02 11:53:25 crc kubenswrapper[4725]: E1002 11:53:25.950265 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3\": container with ID starting with f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3 not found: ID does not exist" containerID="f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3" Oct 02 11:53:25 crc kubenswrapper[4725]: I1002 11:53:25.950349 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3"} err="failed to get container status \"f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3\": rpc error: code = NotFound desc = could not find container \"f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3\": container with ID starting with f5ec73ff41a647e5b16785fa7dea8a6f94f34010e0745e8219dd216c8982b1d3 not found: ID does not exist" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.220208 4725 util.go:48] "No ready sandbox for pod can be found. 
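
The RemoveContainer / "DeleteContainer returned error ... NotFound" pairs above are a benign race: by the time the kubelet re-queries container status, the runtime has already deleted the container. The usual pattern is to treat NotFound as success during cleanup; a sketch with a hypothetical runtime-client interface, not the kubelet's actual CRI wiring:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // RuntimeClient is a hypothetical stand-in for the CRI runtime service.
    type RuntimeClient interface {
        RemoveContainer(ctx context.Context, id string) error
    }

    func removeIfPresent(ctx context.Context, rt RuntimeClient, id string) error {
        if err := rt.RemoveContainer(ctx, id); err != nil {
            if status.Code(err) == codes.NotFound {
                // Already removed by a concurrent cleanup; nothing left to do.
                fmt.Printf("container %s already gone, ignoring\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    // fakeRT simulates the race seen in the log: the container is already gone.
    type fakeRT struct{}

    func (fakeRT) RemoveContainer(ctx context.Context, id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    func main() {
        _ = removeIfPresent(context.Background(), fakeRT{}, "deadbeef") // placeholder ID
    }
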
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.402440 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp2g5\" (UniqueName: \"kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5\") pod \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.402639 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content\") pod \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.403115 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities\") pod \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\" (UID: \"d4cf402d-2f30-4fa2-af30-33efa716d0f7\") " Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.405942 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities" (OuterVolumeSpecName: "utilities") pod "d4cf402d-2f30-4fa2-af30-33efa716d0f7" (UID: "d4cf402d-2f30-4fa2-af30-33efa716d0f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.409239 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5" (OuterVolumeSpecName: "kube-api-access-fp2g5") pod "d4cf402d-2f30-4fa2-af30-33efa716d0f7" (UID: "d4cf402d-2f30-4fa2-af30-33efa716d0f7"). InnerVolumeSpecName "kube-api-access-fp2g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.483294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4cf402d-2f30-4fa2-af30-33efa716d0f7" (UID: "d4cf402d-2f30-4fa2-af30-33efa716d0f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.506187 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.506268 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp2g5\" (UniqueName: \"kubernetes.io/projected/d4cf402d-2f30-4fa2-af30-33efa716d0f7-kube-api-access-fp2g5\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.506291 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cf402d-2f30-4fa2-af30-33efa716d0f7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.760043 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerID="1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590" exitCode=0 Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.760114 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9qlv" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.760115 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerDied","Data":"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590"} Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.760208 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9qlv" event={"ID":"d4cf402d-2f30-4fa2-af30-33efa716d0f7","Type":"ContainerDied","Data":"4944b139417863e9de564d06263731d5dae4370cd2891afb1f04b9e2ab2dd75a"} Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.760258 4725 scope.go:117] "RemoveContainer" containerID="1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.794759 4725 scope.go:117] "RemoveContainer" containerID="de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.798061 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.806920 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9qlv"] Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.819759 4725 scope.go:117] "RemoveContainer" containerID="6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.843794 4725 scope.go:117] "RemoveContainer" containerID="1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590" Oct 02 11:53:26 crc kubenswrapper[4725]: E1002 11:53:26.846275 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590\": container with ID starting with 1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590 not found: ID does not exist" containerID="1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.846311 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590"} err="failed to get container status \"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590\": rpc error: code = NotFound desc = could not find container \"1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590\": container with ID starting with 1f58a1b50896dd799d9ab546e24d8a129a689843eadf950b58d15c1f8e914590 not found: ID does not exist" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.846332 4725 scope.go:117] "RemoveContainer" containerID="de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe" Oct 02 11:53:26 crc kubenswrapper[4725]: E1002 11:53:26.846792 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe\": container with ID starting with de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe not found: ID does not exist" containerID="de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.846845 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe"} err="failed to get container status \"de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe\": rpc error: code = NotFound desc = could not find container \"de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe\": container with ID starting with de420cbdc32b8accb259ef4d23154a6be3622f58455f4b4441aa0c3d2ff298fe not found: ID does not exist" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.846881 4725 scope.go:117] "RemoveContainer" containerID="6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac" Oct 02 11:53:26 crc kubenswrapper[4725]: E1002 11:53:26.847221 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac\": container with ID starting with 6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac not found: ID does not exist" containerID="6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac" Oct 02 11:53:26 crc kubenswrapper[4725]: I1002 11:53:26.847274 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac"} err="failed to get container status \"6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac\": rpc error: code = NotFound desc = could not find container \"6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac\": container with ID starting with 6a1186f1bd0b7e6edaa856a159fa33ad7586d4e895da783396fdcb9b3689afac not found: ID does not exist" Oct 02 11:53:27 crc kubenswrapper[4725]: I1002 11:53:27.280630 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" path="/var/lib/kubelet/pods/214fc6c4-aebf-4fb6-af82-116a4b10a7cf/volumes" Oct 02 11:53:27 crc kubenswrapper[4725]: I1002 11:53:27.281570 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" path="/var/lib/kubelet/pods/d4cf402d-2f30-4fa2-af30-33efa716d0f7/volumes" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 
11:54:10.517634 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518684 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518700 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="extract-content" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518740 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="extract-content" Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518757 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="extract-utilities" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518765 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="extract-utilities" Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518784 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="extract-utilities" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518791 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="extract-utilities" Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518807 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518814 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: E1002 11:54:10.518835 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="extract-content" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.518841 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="extract-content" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.519064 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="214fc6c4-aebf-4fb6-af82-116a4b10a7cf" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.519076 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cf402d-2f30-4fa2-af30-33efa716d0f7" containerName="registry-server" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.520549 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.547761 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.685943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.686213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.686261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8xf\" (UniqueName: \"kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.789349 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.789406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8xf\" (UniqueName: \"kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.789473 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.789906 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.790044 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.811822 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8t8xf\" (UniqueName: \"kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf\") pod \"community-operators-hn98s\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:10 crc kubenswrapper[4725]: I1002 11:54:10.845037 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:11 crc kubenswrapper[4725]: I1002 11:54:11.352147 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:12 crc kubenswrapper[4725]: I1002 11:54:12.248988 4725 generic.go:334] "Generic (PLEG): container finished" podID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerID="4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4" exitCode=0 Oct 02 11:54:12 crc kubenswrapper[4725]: I1002 11:54:12.249069 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerDied","Data":"4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4"} Oct 02 11:54:12 crc kubenswrapper[4725]: I1002 11:54:12.249502 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerStarted","Data":"b688f93a46748abd3e86c7b1e27062171a2b67fd53c6c1d105bbe50e9bbb3411"} Oct 02 11:54:13 crc kubenswrapper[4725]: I1002 11:54:13.259000 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerStarted","Data":"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d"} Oct 02 11:54:14 crc kubenswrapper[4725]: I1002 11:54:14.272872 4725 generic.go:334] "Generic (PLEG): container finished" podID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerID="cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d" exitCode=0 Oct 02 11:54:14 crc kubenswrapper[4725]: I1002 11:54:14.272965 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerDied","Data":"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d"} Oct 02 11:54:15 crc kubenswrapper[4725]: I1002 11:54:15.287077 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerStarted","Data":"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32"} Oct 02 11:54:15 crc kubenswrapper[4725]: I1002 11:54:15.306338 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn98s" podStartSLOduration=2.771606951 podStartE2EDuration="5.306322599s" podCreationTimestamp="2025-10-02 11:54:10 +0000 UTC" firstStartedPulling="2025-10-02 11:54:12.251435822 +0000 UTC m=+1572.158935285" lastFinishedPulling="2025-10-02 11:54:14.78615146 +0000 UTC m=+1574.693650933" observedRunningTime="2025-10-02 11:54:15.306291798 +0000 UTC m=+1575.213791301" watchObservedRunningTime="2025-10-02 11:54:15.306322599 +0000 UTC m=+1575.213822062" Oct 02 11:54:20 crc kubenswrapper[4725]: I1002 11:54:20.845983 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:20 crc kubenswrapper[4725]: I1002 11:54:20.846557 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:20 crc kubenswrapper[4725]: I1002 11:54:20.904439 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:21 crc kubenswrapper[4725]: I1002 11:54:21.408576 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:21 crc kubenswrapper[4725]: I1002 11:54:21.461208 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.372207 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hn98s" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="registry-server" containerID="cri-o://b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32" gracePeriod=2 Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.865626 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.947805 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t8xf\" (UniqueName: \"kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf\") pod \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.947878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities\") pod \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.947927 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content\") pod \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\" (UID: \"4873f30f-f1bd-4d0e-b71d-363c03d2c09d\") " Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.948978 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities" (OuterVolumeSpecName: "utilities") pod "4873f30f-f1bd-4d0e-b71d-363c03d2c09d" (UID: "4873f30f-f1bd-4d0e-b71d-363c03d2c09d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:54:23 crc kubenswrapper[4725]: I1002 11:54:23.954666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf" (OuterVolumeSpecName: "kube-api-access-8t8xf") pod "4873f30f-f1bd-4d0e-b71d-363c03d2c09d" (UID: "4873f30f-f1bd-4d0e-b71d-363c03d2c09d"). InnerVolumeSpecName "kube-api-access-8t8xf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.052418 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t8xf\" (UniqueName: \"kubernetes.io/projected/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-kube-api-access-8t8xf\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.052458 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.385872 4725 generic.go:334] "Generic (PLEG): container finished" podID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerID="b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32" exitCode=0 Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.385944 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerDied","Data":"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32"} Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.385970 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn98s" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.385998 4725 scope.go:117] "RemoveContainer" containerID="b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.385980 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn98s" event={"ID":"4873f30f-f1bd-4d0e-b71d-363c03d2c09d","Type":"ContainerDied","Data":"b688f93a46748abd3e86c7b1e27062171a2b67fd53c6c1d105bbe50e9bbb3411"} Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.413904 4725 scope.go:117] "RemoveContainer" containerID="cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.447292 4725 scope.go:117] "RemoveContainer" containerID="4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.491937 4725 scope.go:117] "RemoveContainer" containerID="b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32" Oct 02 11:54:24 crc kubenswrapper[4725]: E1002 11:54:24.492465 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32\": container with ID starting with b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32 not found: ID does not exist" containerID="b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.492502 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32"} err="failed to get container status \"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32\": rpc error: code = NotFound desc = could not find container \"b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32\": container with ID starting with b0565b77e3f693c4d382330f7c145ef7202181791290490df6a25e7410bd8d32 not found: ID does not exist" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.492527 4725 scope.go:117] 
"RemoveContainer" containerID="cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d" Oct 02 11:54:24 crc kubenswrapper[4725]: E1002 11:54:24.492877 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d\": container with ID starting with cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d not found: ID does not exist" containerID="cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.492908 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d"} err="failed to get container status \"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d\": rpc error: code = NotFound desc = could not find container \"cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d\": container with ID starting with cc68e0e7bbf6e40db606dc63e27f345804a1c3910e84b447f986cc3262c2f23d not found: ID does not exist" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.492927 4725 scope.go:117] "RemoveContainer" containerID="4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4" Oct 02 11:54:24 crc kubenswrapper[4725]: E1002 11:54:24.493260 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4\": container with ID starting with 4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4 not found: ID does not exist" containerID="4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.493292 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4"} err="failed to get container status \"4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4\": rpc error: code = NotFound desc = could not find container \"4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4\": container with ID starting with 4bb8ce91e9cfab9ce667242c0e60ddcc202a3290dc56384facf24a0e8b5338c4 not found: ID does not exist" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.812953 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4873f30f-f1bd-4d0e-b71d-363c03d2c09d" (UID: "4873f30f-f1bd-4d0e-b71d-363c03d2c09d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 11:54:24 crc kubenswrapper[4725]: I1002 11:54:24.868542 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4873f30f-f1bd-4d0e-b71d-363c03d2c09d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:25 crc kubenswrapper[4725]: I1002 11:54:25.035303 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:25 crc kubenswrapper[4725]: I1002 11:54:25.048502 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hn98s"] Oct 02 11:54:25 crc kubenswrapper[4725]: I1002 11:54:25.282900 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" path="/var/lib/kubelet/pods/4873f30f-f1bd-4d0e-b71d-363c03d2c09d/volumes" Oct 02 11:54:44 crc kubenswrapper[4725]: I1002 11:54:44.599596 4725 generic.go:334] "Generic (PLEG): container finished" podID="58fb4d5d-e01c-4ede-91c8-9674a71c34a1" containerID="ea2018ee25f875a1e7b8c55a83ec89241cdd925b154b4f6021020175cca5ba70" exitCode=0 Oct 02 11:54:44 crc kubenswrapper[4725]: I1002 11:54:44.599663 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" event={"ID":"58fb4d5d-e01c-4ede-91c8-9674a71c34a1","Type":"ContainerDied","Data":"ea2018ee25f875a1e7b8c55a83ec89241cdd925b154b4f6021020175cca5ba70"} Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.043294 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.109452 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key\") pod \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.109547 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory\") pod \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.109929 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle\") pod \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.109984 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pms9c\" (UniqueName: \"kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c\") pod \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\" (UID: \"58fb4d5d-e01c-4ede-91c8-9674a71c34a1\") " Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.121074 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c" (OuterVolumeSpecName: "kube-api-access-pms9c") pod "58fb4d5d-e01c-4ede-91c8-9674a71c34a1" (UID: "58fb4d5d-e01c-4ede-91c8-9674a71c34a1"). 
InnerVolumeSpecName "kube-api-access-pms9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.132212 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "58fb4d5d-e01c-4ede-91c8-9674a71c34a1" (UID: "58fb4d5d-e01c-4ede-91c8-9674a71c34a1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.141988 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory" (OuterVolumeSpecName: "inventory") pod "58fb4d5d-e01c-4ede-91c8-9674a71c34a1" (UID: "58fb4d5d-e01c-4ede-91c8-9674a71c34a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.142004 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58fb4d5d-e01c-4ede-91c8-9674a71c34a1" (UID: "58fb4d5d-e01c-4ede-91c8-9674a71c34a1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.212217 4725 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.212247 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pms9c\" (UniqueName: \"kubernetes.io/projected/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-kube-api-access-pms9c\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.212256 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.212266 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fb4d5d-e01c-4ede-91c8-9674a71c34a1-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.623276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" event={"ID":"58fb4d5d-e01c-4ede-91c8-9674a71c34a1","Type":"ContainerDied","Data":"cb56a113df94788683132b2ee5fb3bdeaaeb9159e0803602cdb491ecb65eb77d"} Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.623323 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb56a113df94788683132b2ee5fb3bdeaaeb9159e0803602cdb491ecb65eb77d" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.623370 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.711415 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9"] Oct 02 11:54:46 crc kubenswrapper[4725]: E1002 11:54:46.711977 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="extract-content" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.711999 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="extract-content" Oct 02 11:54:46 crc kubenswrapper[4725]: E1002 11:54:46.712024 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58fb4d5d-e01c-4ede-91c8-9674a71c34a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.712034 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="58fb4d5d-e01c-4ede-91c8-9674a71c34a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 11:54:46 crc kubenswrapper[4725]: E1002 11:54:46.712061 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="registry-server" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.712068 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="registry-server" Oct 02 11:54:46 crc kubenswrapper[4725]: E1002 11:54:46.712085 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="extract-utilities" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.712093 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="extract-utilities" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.712320 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="58fb4d5d-e01c-4ede-91c8-9674a71c34a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.712344 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4873f30f-f1bd-4d0e-b71d-363c03d2c09d" containerName="registry-server" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.713297 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.717202 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.717375 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.717501 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.717699 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.722033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.722256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.722303 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chw7m\" (UniqueName: \"kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.728872 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9"] Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.824340 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.824420 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chw7m\" (UniqueName: \"kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.824450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.828432 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.828476 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:46 crc kubenswrapper[4725]: I1002 11:54:46.843193 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chw7m\" (UniqueName: \"kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:47 crc kubenswrapper[4725]: I1002 11:54:47.048372 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:54:47 crc kubenswrapper[4725]: I1002 11:54:47.665196 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9"] Oct 02 11:54:47 crc kubenswrapper[4725]: W1002 11:54:47.666373 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05b5e1c6_efe9_4a6f_a623_c058ae2e301a.slice/crio-557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d WatchSource:0}: Error finding container 557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d: Status 404 returned error can't find the container with id 557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d Oct 02 11:54:48 crc kubenswrapper[4725]: I1002 11:54:48.641585 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" event={"ID":"05b5e1c6-efe9-4a6f-a623-c058ae2e301a","Type":"ContainerStarted","Data":"2167fc9b9479b9fd9bd0cf485ea01fb782fedbce03f31df1a47038d2bd8a6aad"} Oct 02 11:54:48 crc kubenswrapper[4725]: I1002 11:54:48.642214 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" event={"ID":"05b5e1c6-efe9-4a6f-a623-c058ae2e301a","Type":"ContainerStarted","Data":"557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d"} Oct 02 11:54:48 crc kubenswrapper[4725]: I1002 11:54:48.664305 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" podStartSLOduration=2.088718003 podStartE2EDuration="2.664273413s" podCreationTimestamp="2025-10-02 11:54:46 +0000 UTC" firstStartedPulling="2025-10-02 11:54:47.669168408 +0000 UTC m=+1607.576667871" lastFinishedPulling="2025-10-02 
11:54:48.244723818 +0000 UTC m=+1608.152223281" observedRunningTime="2025-10-02 11:54:48.655714448 +0000 UTC m=+1608.563213931" watchObservedRunningTime="2025-10-02 11:54:48.664273413 +0000 UTC m=+1608.571772876" Oct 02 11:55:14 crc kubenswrapper[4725]: I1002 11:55:14.977783 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:55:14 crc kubenswrapper[4725]: I1002 11:55:14.978215 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:55:32 crc kubenswrapper[4725]: I1002 11:55:32.050991 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fnh89"] Oct 02 11:55:32 crc kubenswrapper[4725]: I1002 11:55:32.066644 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fnh89"] Oct 02 11:55:33 crc kubenswrapper[4725]: I1002 11:55:33.278699 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90069d14-eefb-4a14-bc3a-1d553dd8cb85" path="/var/lib/kubelet/pods/90069d14-eefb-4a14-bc3a-1d553dd8cb85/volumes" Oct 02 11:55:33 crc kubenswrapper[4725]: I1002 11:55:33.606866 4725 scope.go:117] "RemoveContainer" containerID="d0d624d8482b2be82923c2adb0fbeebb9cc8420cf663515b8f9dc04e5e207e2e" Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.037888 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7jtpb"] Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.049524 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qskpf"] Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.059797 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7jtpb"] Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.067950 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qskpf"] Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.286620 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844997fc-7d91-4705-8364-46690b714963" path="/var/lib/kubelet/pods/844997fc-7d91-4705-8364-46690b714963/volumes" Oct 02 11:55:37 crc kubenswrapper[4725]: I1002 11:55:37.287950 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2149e6-a89e-4464-aab7-4acc9b060ed3" path="/var/lib/kubelet/pods/9e2149e6-a89e-4464-aab7-4acc9b060ed3/volumes" Oct 02 11:55:42 crc kubenswrapper[4725]: I1002 11:55:42.031642 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1769-account-create-zwtgw"] Oct 02 11:55:42 crc kubenswrapper[4725]: I1002 11:55:42.039632 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1769-account-create-zwtgw"] Oct 02 11:55:43 crc kubenswrapper[4725]: I1002 11:55:43.283078 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784ffc31-f99b-4f32-9af1-45fc14371051" path="/var/lib/kubelet/pods/784ffc31-f99b-4f32-9af1-45fc14371051/volumes" Oct 02 11:55:44 crc kubenswrapper[4725]: I1002 11:55:44.978659 4725 patch_prober.go:28] interesting 
pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:55:44 crc kubenswrapper[4725]: I1002 11:55:44.979222 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.035155 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2adf-account-create-gdttl"] Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.046088 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2cfd-account-create-cpr2m"] Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.055324 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2adf-account-create-gdttl"] Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.069884 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2cfd-account-create-cpr2m"] Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.282411 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69787ae-b716-4ed5-9a16-5bc50c5e0dc7" path="/var/lib/kubelet/pods/a69787ae-b716-4ed5-9a16-5bc50c5e0dc7/volumes" Oct 02 11:55:47 crc kubenswrapper[4725]: I1002 11:55:47.283807 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24b56cf-ce0c-4703-8a7e-559732d3912e" path="/var/lib/kubelet/pods/b24b56cf-ce0c-4703-8a7e-559732d3912e/volumes" Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.062767 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4994j"] Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.075896 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4994j"] Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.086243 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-g5mdv"] Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.095643 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-g5mdv"] Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.104676 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dzstm"] Oct 02 11:56:00 crc kubenswrapper[4725]: I1002 11:56:00.113279 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dzstm"] Oct 02 11:56:01 crc kubenswrapper[4725]: I1002 11:56:01.284793 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c4bc8e-ed79-4ec9-81bc-862adbdf3f44" path="/var/lib/kubelet/pods/59c4bc8e-ed79-4ec9-81bc-862adbdf3f44/volumes" Oct 02 11:56:01 crc kubenswrapper[4725]: I1002 11:56:01.285628 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76616a88-ee14-40ad-98e3-70a436c064e3" path="/var/lib/kubelet/pods/76616a88-ee14-40ad-98e3-70a436c064e3/volumes" Oct 02 11:56:01 crc kubenswrapper[4725]: I1002 11:56:01.286143 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c89bd4-a34a-46e2-9d31-13e0e657c6d3" path="/var/lib/kubelet/pods/77c89bd4-a34a-46e2-9d31-13e0e657c6d3/volumes" Oct 02 11:56:07 crc 
kubenswrapper[4725]: I1002 11:56:07.033393 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9gmkp"] Oct 02 11:56:07 crc kubenswrapper[4725]: I1002 11:56:07.045980 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9gmkp"] Oct 02 11:56:07 crc kubenswrapper[4725]: I1002 11:56:07.281911 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90074e8-8d41-4fb8-98b8-4d202a69c345" path="/var/lib/kubelet/pods/b90074e8-8d41-4fb8-98b8-4d202a69c345/volumes" Oct 02 11:56:09 crc kubenswrapper[4725]: I1002 11:56:09.041530 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kkjvm"] Oct 02 11:56:09 crc kubenswrapper[4725]: I1002 11:56:09.051135 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kkjvm"] Oct 02 11:56:09 crc kubenswrapper[4725]: I1002 11:56:09.278668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308dd5cf-d967-4ffc-821d-e94014a85ddd" path="/var/lib/kubelet/pods/308dd5cf-d967-4ffc-821d-e94014a85ddd/volumes" Oct 02 11:56:14 crc kubenswrapper[4725]: I1002 11:56:14.978166 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 11:56:14 crc kubenswrapper[4725]: I1002 11:56:14.979040 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 11:56:14 crc kubenswrapper[4725]: I1002 11:56:14.979110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 11:56:14 crc kubenswrapper[4725]: I1002 11:56:14.980179 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 11:56:14 crc kubenswrapper[4725]: I1002 11:56:14.980260 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" gracePeriod=600 Oct 02 11:56:15 crc kubenswrapper[4725]: I1002 11:56:15.600597 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" exitCode=0 Oct 02 11:56:15 crc kubenswrapper[4725]: I1002 11:56:15.600704 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d"} Oct 02 11:56:15 crc kubenswrapper[4725]: I1002 11:56:15.600959 4725 
scope.go:117] "RemoveContainer" containerID="a16429b8c29887ced1bf90fd0b8fd7b846f3af47c6a14a9d7929f1cccc3dad08" Oct 02 11:56:15 crc kubenswrapper[4725]: E1002 11:56:15.626042 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:56:16 crc kubenswrapper[4725]: I1002 11:56:16.618179 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:56:16 crc kubenswrapper[4725]: E1002 11:56:16.619432 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:56:21 crc kubenswrapper[4725]: I1002 11:56:21.668842 4725 generic.go:334] "Generic (PLEG): container finished" podID="05b5e1c6-efe9-4a6f-a623-c058ae2e301a" containerID="2167fc9b9479b9fd9bd0cf485ea01fb782fedbce03f31df1a47038d2bd8a6aad" exitCode=0 Oct 02 11:56:21 crc kubenswrapper[4725]: I1002 11:56:21.668896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" event={"ID":"05b5e1c6-efe9-4a6f-a623-c058ae2e301a","Type":"ContainerDied","Data":"2167fc9b9479b9fd9bd0cf485ea01fb782fedbce03f31df1a47038d2bd8a6aad"} Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.159442 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.288169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key\") pod \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.288717 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory\") pod \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.289045 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chw7m\" (UniqueName: \"kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m\") pod \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\" (UID: \"05b5e1c6-efe9-4a6f-a623-c058ae2e301a\") " Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.294624 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m" (OuterVolumeSpecName: "kube-api-access-chw7m") pod "05b5e1c6-efe9-4a6f-a623-c058ae2e301a" (UID: "05b5e1c6-efe9-4a6f-a623-c058ae2e301a"). InnerVolumeSpecName "kube-api-access-chw7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.316212 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory" (OuterVolumeSpecName: "inventory") pod "05b5e1c6-efe9-4a6f-a623-c058ae2e301a" (UID: "05b5e1c6-efe9-4a6f-a623-c058ae2e301a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.317361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05b5e1c6-efe9-4a6f-a623-c058ae2e301a" (UID: "05b5e1c6-efe9-4a6f-a623-c058ae2e301a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.392350 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.392385 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chw7m\" (UniqueName: \"kubernetes.io/projected/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-kube-api-access-chw7m\") on node \"crc\" DevicePath \"\"" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.392419 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b5e1c6-efe9-4a6f-a623-c058ae2e301a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.696997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" event={"ID":"05b5e1c6-efe9-4a6f-a623-c058ae2e301a","Type":"ContainerDied","Data":"557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d"} Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.697067 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557367107b5ff1a373a60daa0f911c54441b9993c43af2055e8a768b143e3b0d" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.697096 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.812885 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2"] Oct 02 11:56:23 crc kubenswrapper[4725]: E1002 11:56:23.813383 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b5e1c6-efe9-4a6f-a623-c058ae2e301a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.813399 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b5e1c6-efe9-4a6f-a623-c058ae2e301a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.813584 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b5e1c6-efe9-4a6f-a623-c058ae2e301a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.814304 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.818695 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.819045 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.819070 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.819209 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:56:23 crc kubenswrapper[4725]: I1002 11:56:23.824164 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2"] Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.003768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.004275 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blkp2\" (UniqueName: \"kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.004324 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.105789 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.105912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blkp2\" (UniqueName: \"kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.105940 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.111611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.120883 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.123566 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blkp2\" (UniqueName: \"kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.140640 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.676043 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2"] Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.683937 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 11:56:24 crc kubenswrapper[4725]: I1002 11:56:24.705109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" event={"ID":"15f9480b-ec9f-48d1-9778-1376f2c1245e","Type":"ContainerStarted","Data":"a510516975f780e200cac08f19dcb4265d86114ec34f561eb27457fa65351778"} Oct 02 11:56:26 crc kubenswrapper[4725]: I1002 11:56:26.730391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" event={"ID":"15f9480b-ec9f-48d1-9778-1376f2c1245e","Type":"ContainerStarted","Data":"f68796ff9862d50ae3df44ef650ef9e9639b86b92226e2a54e627136305df769"} Oct 02 11:56:26 crc kubenswrapper[4725]: I1002 11:56:26.765689 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" podStartSLOduration=2.899380367 podStartE2EDuration="3.765657951s" podCreationTimestamp="2025-10-02 11:56:23 +0000 UTC" firstStartedPulling="2025-10-02 11:56:24.683660326 +0000 UTC m=+1704.591159789" lastFinishedPulling="2025-10-02 11:56:25.54993791 +0000 UTC m=+1705.457437373" observedRunningTime="2025-10-02 11:56:26.753034868 +0000 UTC m=+1706.660534361" watchObservedRunningTime="2025-10-02 11:56:26.765657951 +0000 UTC m=+1706.673157434" Oct 02 11:56:30 crc kubenswrapper[4725]: I1002 11:56:30.268214 4725 scope.go:117] "RemoveContainer" 
containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:56:30 crc kubenswrapper[4725]: E1002 11:56:30.268832 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:56:31 crc kubenswrapper[4725]: I1002 11:56:31.044909 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7382-account-create-kfv6s"] Oct 02 11:56:31 crc kubenswrapper[4725]: I1002 11:56:31.053843 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7382-account-create-kfv6s"] Oct 02 11:56:31 crc kubenswrapper[4725]: I1002 11:56:31.283968 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8b387f-3df2-4803-b785-99815cea430a" path="/var/lib/kubelet/pods/4a8b387f-3df2-4803-b785-99815cea430a/volumes" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.035964 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a7dd-account-create-7lhl9"] Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.047694 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a7dd-account-create-7lhl9"] Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.286668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d384f0dd-dd01-4a5c-89d2-7c5981ad434d" path="/var/lib/kubelet/pods/d384f0dd-dd01-4a5c-89d2-7c5981ad434d/volumes" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.691195 4725 scope.go:117] "RemoveContainer" containerID="191dfccb66de35bd8f91802f9c60a6c76a2fc654e736b846da95d29b20d4b48b" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.745089 4725 scope.go:117] "RemoveContainer" containerID="9ec4afa729ab9fcb6c23d28d960807526e3562e45fe992f8472fc03903a370fa" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.775446 4725 scope.go:117] "RemoveContainer" containerID="c6986b71f14ebbd4af9f33592aee7688302b2447f72519436e1afd2716aab812" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.823368 4725 scope.go:117] "RemoveContainer" containerID="35e485dbc50d40a461165d015bffcfa1cb39dd455c19a91aac3f984115663925" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.869869 4725 scope.go:117] "RemoveContainer" containerID="01398f78e4c3d562764ad0e60ab00e4d79aca012db80f822050401eb00f6359d" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.937127 4725 scope.go:117] "RemoveContainer" containerID="009f9875fc2c7b8074178d49cd0db0c965a6ca16e9bf395203ff32cdb203090e" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.955437 4725 scope.go:117] "RemoveContainer" containerID="d87d4fdbbda1a5124e64b258ee1d73de2500ae3d4b6b0fa07f6420263e046936" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.973151 4725 scope.go:117] "RemoveContainer" containerID="8eff4c84e3fdd2f0678dff12c75231c48957010fcf87a44196cfeaabfd8425e0" Oct 02 11:56:33 crc kubenswrapper[4725]: I1002 11:56:33.989990 4725 scope.go:117] "RemoveContainer" containerID="d4d5ae8b1418a8c9236c862c22bb2b3f602013365a79ec84e61eec06623e5fba" Oct 02 11:56:34 crc kubenswrapper[4725]: I1002 11:56:34.011081 4725 scope.go:117] "RemoveContainer" containerID="91226ef91f2cf0066ad144b00076b35cf57578980278c2a27e40787c583bbffa" Oct 02 11:56:34 
crc kubenswrapper[4725]: I1002 11:56:34.041237 4725 scope.go:117] "RemoveContainer" containerID="20675bdf17716c9c9f01243837d6d98ffc919aefaf319744b4fcdd280d9cb10c" Oct 02 11:56:34 crc kubenswrapper[4725]: I1002 11:56:34.104164 4725 scope.go:117] "RemoveContainer" containerID="c00cc9fa5254b4700e1f165534a20384f99f4dce1b1cba27d73499c3c25c6c4f" Oct 02 11:56:35 crc kubenswrapper[4725]: I1002 11:56:35.032146 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-daa4-account-create-gbmhh"] Oct 02 11:56:35 crc kubenswrapper[4725]: I1002 11:56:35.045750 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-daa4-account-create-gbmhh"] Oct 02 11:56:35 crc kubenswrapper[4725]: I1002 11:56:35.285591 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480d8d51-eb78-4bce-8f31-0ff29e6ac822" path="/var/lib/kubelet/pods/480d8d51-eb78-4bce-8f31-0ff29e6ac822/volumes" Oct 02 11:56:42 crc kubenswrapper[4725]: I1002 11:56:42.034087 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zmk95"] Oct 02 11:56:42 crc kubenswrapper[4725]: I1002 11:56:42.041436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zmk95"] Oct 02 11:56:42 crc kubenswrapper[4725]: I1002 11:56:42.267969 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:56:42 crc kubenswrapper[4725]: E1002 11:56:42.268314 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:56:43 crc kubenswrapper[4725]: I1002 11:56:43.278125 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588ebf73-67b1-4662-b4cf-d51123a49937" path="/var/lib/kubelet/pods/588ebf73-67b1-4662-b4cf-d51123a49937/volumes" Oct 02 11:56:56 crc kubenswrapper[4725]: I1002 11:56:56.267305 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:56:56 crc kubenswrapper[4725]: E1002 11:56:56.268040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:56:59 crc kubenswrapper[4725]: I1002 11:56:59.047804 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kk9t6"] Oct 02 11:56:59 crc kubenswrapper[4725]: I1002 11:56:59.061100 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kk9t6"] Oct 02 11:56:59 crc kubenswrapper[4725]: I1002 11:56:59.289656 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff01db3-af7c-4b84-8258-482f19a0a330" path="/var/lib/kubelet/pods/dff01db3-af7c-4b84-8258-482f19a0a330/volumes" Oct 02 11:57:11 crc kubenswrapper[4725]: I1002 11:57:11.277240 4725 scope.go:117] "RemoveContainer" 
containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:57:11 crc kubenswrapper[4725]: E1002 11:57:11.277942 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:57:25 crc kubenswrapper[4725]: I1002 11:57:25.269214 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:57:25 crc kubenswrapper[4725]: E1002 11:57:25.270455 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:57:34 crc kubenswrapper[4725]: I1002 11:57:34.358781 4725 scope.go:117] "RemoveContainer" containerID="c9a356393159f5d7e114cb81d89851a23264f01e0461805e984aa6fa2426aef7" Oct 02 11:57:34 crc kubenswrapper[4725]: I1002 11:57:34.410420 4725 scope.go:117] "RemoveContainer" containerID="76705b9cfcbbaf736d157836bbe6b5a0e09f173a6acccc2bbc939d729ea5c812" Oct 02 11:57:34 crc kubenswrapper[4725]: I1002 11:57:34.438386 4725 scope.go:117] "RemoveContainer" containerID="12f031825f2811221fd0aaf7c85c72319c391f08732db59e3c5020d4ed16b7f9" Oct 02 11:57:38 crc kubenswrapper[4725]: I1002 11:57:38.064174 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q2mkt"] Oct 02 11:57:38 crc kubenswrapper[4725]: I1002 11:57:38.070931 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q2mkt"] Oct 02 11:57:38 crc kubenswrapper[4725]: I1002 11:57:38.267772 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:57:38 crc kubenswrapper[4725]: E1002 11:57:38.268009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:57:39 crc kubenswrapper[4725]: I1002 11:57:39.279944 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050130d8-978e-40e2-9869-ffdbcf50da81" path="/var/lib/kubelet/pods/050130d8-978e-40e2-9869-ffdbcf50da81/volumes" Oct 02 11:57:39 crc kubenswrapper[4725]: I1002 11:57:39.524339 4725 generic.go:334] "Generic (PLEG): container finished" podID="15f9480b-ec9f-48d1-9778-1376f2c1245e" containerID="f68796ff9862d50ae3df44ef650ef9e9639b86b92226e2a54e627136305df769" exitCode=0 Oct 02 11:57:39 crc kubenswrapper[4725]: I1002 11:57:39.524453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" 
event={"ID":"15f9480b-ec9f-48d1-9778-1376f2c1245e","Type":"ContainerDied","Data":"f68796ff9862d50ae3df44ef650ef9e9639b86b92226e2a54e627136305df769"} Oct 02 11:57:40 crc kubenswrapper[4725]: I1002 11:57:40.931755 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:57:40 crc kubenswrapper[4725]: I1002 11:57:40.968794 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key\") pod \"15f9480b-ec9f-48d1-9778-1376f2c1245e\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " Oct 02 11:57:40 crc kubenswrapper[4725]: I1002 11:57:40.968836 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory\") pod \"15f9480b-ec9f-48d1-9778-1376f2c1245e\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " Oct 02 11:57:40 crc kubenswrapper[4725]: I1002 11:57:40.968937 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blkp2\" (UniqueName: \"kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2\") pod \"15f9480b-ec9f-48d1-9778-1376f2c1245e\" (UID: \"15f9480b-ec9f-48d1-9778-1376f2c1245e\") " Oct 02 11:57:40 crc kubenswrapper[4725]: I1002 11:57:40.976498 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2" (OuterVolumeSpecName: "kube-api-access-blkp2") pod "15f9480b-ec9f-48d1-9778-1376f2c1245e" (UID: "15f9480b-ec9f-48d1-9778-1376f2c1245e"). InnerVolumeSpecName "kube-api-access-blkp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.002878 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory" (OuterVolumeSpecName: "inventory") pod "15f9480b-ec9f-48d1-9778-1376f2c1245e" (UID: "15f9480b-ec9f-48d1-9778-1376f2c1245e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.003366 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "15f9480b-ec9f-48d1-9778-1376f2c1245e" (UID: "15f9480b-ec9f-48d1-9778-1376f2c1245e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.070695 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blkp2\" (UniqueName: \"kubernetes.io/projected/15f9480b-ec9f-48d1-9778-1376f2c1245e-kube-api-access-blkp2\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.070759 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.070777 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15f9480b-ec9f-48d1-9778-1376f2c1245e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.550158 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" event={"ID":"15f9480b-ec9f-48d1-9778-1376f2c1245e","Type":"ContainerDied","Data":"a510516975f780e200cac08f19dcb4265d86114ec34f561eb27457fa65351778"} Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.550218 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.550222 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a510516975f780e200cac08f19dcb4265d86114ec34f561eb27457fa65351778" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.612796 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls"] Oct 02 11:57:41 crc kubenswrapper[4725]: E1002 11:57:41.613276 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f9480b-ec9f-48d1-9778-1376f2c1245e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.613303 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f9480b-ec9f-48d1-9778-1376f2c1245e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.613583 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f9480b-ec9f-48d1-9778-1376f2c1245e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.614487 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.618192 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.618878 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.619047 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.619240 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.623847 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls"] Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.681316 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79t42\" (UniqueName: \"kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.681381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.681494 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.783446 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.783660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79t42\" (UniqueName: \"kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.783700 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.787611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.791323 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.806107 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79t42\" (UniqueName: \"kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:41 crc kubenswrapper[4725]: I1002 11:57:41.934214 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:42 crc kubenswrapper[4725]: I1002 11:57:42.450510 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls"] Oct 02 11:57:42 crc kubenswrapper[4725]: I1002 11:57:42.560413 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" event={"ID":"b10901fd-a5d7-431f-a105-ff03a7554335","Type":"ContainerStarted","Data":"9af748b6c0ad8555317e2759ea41fb8fe61ee2aa2834d9f324e01ccb5ec1006f"} Oct 02 11:57:43 crc kubenswrapper[4725]: I1002 11:57:43.572320 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" event={"ID":"b10901fd-a5d7-431f-a105-ff03a7554335","Type":"ContainerStarted","Data":"38862e8f52893dd673afa68dcc38019b34eb74861e42874e4e3e690235c19bec"} Oct 02 11:57:43 crc kubenswrapper[4725]: I1002 11:57:43.599298 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" podStartSLOduration=2.194105474 podStartE2EDuration="2.599277033s" podCreationTimestamp="2025-10-02 11:57:41 +0000 UTC" firstStartedPulling="2025-10-02 11:57:42.457794964 +0000 UTC m=+1782.365294457" lastFinishedPulling="2025-10-02 11:57:42.862966553 +0000 UTC m=+1782.770466016" observedRunningTime="2025-10-02 11:57:43.595560183 +0000 UTC m=+1783.503059716" watchObservedRunningTime="2025-10-02 11:57:43.599277033 +0000 UTC m=+1783.506776496" Oct 02 11:57:47 crc kubenswrapper[4725]: I1002 11:57:47.610998 4725 generic.go:334] "Generic (PLEG): container finished" podID="b10901fd-a5d7-431f-a105-ff03a7554335" containerID="38862e8f52893dd673afa68dcc38019b34eb74861e42874e4e3e690235c19bec" exitCode=0 Oct 02 11:57:47 crc kubenswrapper[4725]: I1002 
11:57:47.611116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" event={"ID":"b10901fd-a5d7-431f-a105-ff03a7554335","Type":"ContainerDied","Data":"38862e8f52893dd673afa68dcc38019b34eb74861e42874e4e3e690235c19bec"} Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.070554 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.231099 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory\") pod \"b10901fd-a5d7-431f-a105-ff03a7554335\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.231756 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79t42\" (UniqueName: \"kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42\") pod \"b10901fd-a5d7-431f-a105-ff03a7554335\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.231906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key\") pod \"b10901fd-a5d7-431f-a105-ff03a7554335\" (UID: \"b10901fd-a5d7-431f-a105-ff03a7554335\") " Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.242975 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42" (OuterVolumeSpecName: "kube-api-access-79t42") pod "b10901fd-a5d7-431f-a105-ff03a7554335" (UID: "b10901fd-a5d7-431f-a105-ff03a7554335"). InnerVolumeSpecName "kube-api-access-79t42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.260603 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b10901fd-a5d7-431f-a105-ff03a7554335" (UID: "b10901fd-a5d7-431f-a105-ff03a7554335"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.281911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory" (OuterVolumeSpecName: "inventory") pod "b10901fd-a5d7-431f-a105-ff03a7554335" (UID: "b10901fd-a5d7-431f-a105-ff03a7554335"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.335551 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.335613 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10901fd-a5d7-431f-a105-ff03a7554335-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.335626 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79t42\" (UniqueName: \"kubernetes.io/projected/b10901fd-a5d7-431f-a105-ff03a7554335-kube-api-access-79t42\") on node \"crc\" DevicePath \"\"" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.632685 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" event={"ID":"b10901fd-a5d7-431f-a105-ff03a7554335","Type":"ContainerDied","Data":"9af748b6c0ad8555317e2759ea41fb8fe61ee2aa2834d9f324e01ccb5ec1006f"} Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.632752 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af748b6c0ad8555317e2759ea41fb8fe61ee2aa2834d9f324e01ccb5ec1006f" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.632855 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.734142 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs"] Oct 02 11:57:49 crc kubenswrapper[4725]: E1002 11:57:49.734606 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10901fd-a5d7-431f-a105-ff03a7554335" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.734625 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10901fd-a5d7-431f-a105-ff03a7554335" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.734831 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10901fd-a5d7-431f-a105-ff03a7554335" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.735464 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.737779 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.738882 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.739177 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.739333 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.743941 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs"] Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.846493 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.846583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkq5\" (UniqueName: \"kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.848764 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.950291 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.950407 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkq5\" (UniqueName: \"kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.950506 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: 
\"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.954144 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.954518 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:49 crc kubenswrapper[4725]: I1002 11:57:49.975713 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkq5\" (UniqueName: \"kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nq8fs\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:50 crc kubenswrapper[4725]: I1002 11:57:50.071481 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:57:50 crc kubenswrapper[4725]: I1002 11:57:50.603162 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs"] Oct 02 11:57:50 crc kubenswrapper[4725]: I1002 11:57:50.641581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" event={"ID":"25f013e8-9c08-40f3-84d8-2ddcb5528f44","Type":"ContainerStarted","Data":"db0b84496f533d2d76e16e3881228cb27b1d4cf8df38b42500e515f0f840ccdd"} Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.035614 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-tw4z2"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.043528 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-56t7s"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.051523 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tkqzt"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.060310 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-tw4z2"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.069734 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-56t7s"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.077679 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tkqzt"] Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.276548 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:57:51 crc kubenswrapper[4725]: E1002 11:57:51.276808 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.279613 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804fa613-f386-41a1-975e-835525211cb3" path="/var/lib/kubelet/pods/804fa613-f386-41a1-975e-835525211cb3/volumes" Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.281299 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a932dc2-f106-4272-ad37-091b4b8ec1dc" path="/var/lib/kubelet/pods/9a932dc2-f106-4272-ad37-091b4b8ec1dc/volumes" Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.283406 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7eefa9c-3d9f-46d7-a47a-9886fd2b120b" path="/var/lib/kubelet/pods/c7eefa9c-3d9f-46d7-a47a-9886fd2b120b/volumes" Oct 02 11:57:51 crc kubenswrapper[4725]: I1002 11:57:51.651021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" event={"ID":"25f013e8-9c08-40f3-84d8-2ddcb5528f44","Type":"ContainerStarted","Data":"7dde812bfc2bd8b4392bd01ad7b38dc94ad817870f7089519d754b66d0fc53b7"} Oct 02 11:57:52 crc kubenswrapper[4725]: I1002 11:57:52.026860 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" podStartSLOduration=2.55999322 podStartE2EDuration="3.026841229s" podCreationTimestamp="2025-10-02 11:57:49 +0000 UTC" firstStartedPulling="2025-10-02 11:57:50.615645362 +0000 UTC m=+1790.523144825" lastFinishedPulling="2025-10-02 11:57:51.082493361 +0000 UTC m=+1790.989992834" observedRunningTime="2025-10-02 11:57:51.666003141 +0000 UTC m=+1791.573502614" watchObservedRunningTime="2025-10-02 11:57:52.026841229 +0000 UTC m=+1791.934340692" Oct 02 11:57:52 crc kubenswrapper[4725]: I1002 11:57:52.034201 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-b725j"] Oct 02 11:57:52 crc kubenswrapper[4725]: I1002 11:57:52.043967 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-b725j"] Oct 02 11:57:53 crc kubenswrapper[4725]: I1002 11:57:53.279369 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca7b1966-197e-4e08-a162-7a3dd7eab8ed" path="/var/lib/kubelet/pods/ca7b1966-197e-4e08-a162-7a3dd7eab8ed/volumes" Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.041007 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a442-account-create-zrmtv"] Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.055621 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7jgcd"] Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.062667 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a442-account-create-zrmtv"] Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.069949 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7jgcd"] Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.288282 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9514d7f5-f9ab-4f77-9bea-5952912df791" path="/var/lib/kubelet/pods/9514d7f5-f9ab-4f77-9bea-5952912df791/volumes" Oct 02 11:57:57 crc kubenswrapper[4725]: I1002 11:57:57.292256 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e3e740ae-3b77-4497-b730-6cbd4f960d84" path="/var/lib/kubelet/pods/e3e740ae-3b77-4497-b730-6cbd4f960d84/volumes" Oct 02 11:58:06 crc kubenswrapper[4725]: I1002 11:58:06.268533 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:58:06 crc kubenswrapper[4725]: E1002 11:58:06.278339 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:58:08 crc kubenswrapper[4725]: I1002 11:58:08.054857 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8bc6-account-create-5mhvj"] Oct 02 11:58:08 crc kubenswrapper[4725]: I1002 11:58:08.067313 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3b1c-account-create-hrzwx"] Oct 02 11:58:08 crc kubenswrapper[4725]: I1002 11:58:08.078247 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8bc6-account-create-5mhvj"] Oct 02 11:58:08 crc kubenswrapper[4725]: I1002 11:58:08.091038 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3b1c-account-create-hrzwx"] Oct 02 11:58:09 crc kubenswrapper[4725]: I1002 11:58:09.279756 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4313ed03-47ea-43a4-b854-634bfd153111" path="/var/lib/kubelet/pods/4313ed03-47ea-43a4-b854-634bfd153111/volumes" Oct 02 11:58:09 crc kubenswrapper[4725]: I1002 11:58:09.280377 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f115e8bf-e0c9-4c1d-90ac-a55224c8eef9" path="/var/lib/kubelet/pods/f115e8bf-e0c9-4c1d-90ac-a55224c8eef9/volumes" Oct 02 11:58:17 crc kubenswrapper[4725]: I1002 11:58:17.267806 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:58:17 crc kubenswrapper[4725]: E1002 11:58:17.268705 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:58:28 crc kubenswrapper[4725]: I1002 11:58:28.013896 4725 generic.go:334] "Generic (PLEG): container finished" podID="25f013e8-9c08-40f3-84d8-2ddcb5528f44" containerID="7dde812bfc2bd8b4392bd01ad7b38dc94ad817870f7089519d754b66d0fc53b7" exitCode=0 Oct 02 11:58:28 crc kubenswrapper[4725]: I1002 11:58:28.014039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" event={"ID":"25f013e8-9c08-40f3-84d8-2ddcb5528f44","Type":"ContainerDied","Data":"7dde812bfc2bd8b4392bd01ad7b38dc94ad817870f7089519d754b66d0fc53b7"} Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.488293 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.632373 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key\") pod \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.632425 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory\") pod \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.632705 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkkq5\" (UniqueName: \"kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5\") pod \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\" (UID: \"25f013e8-9c08-40f3-84d8-2ddcb5528f44\") " Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.637976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5" (OuterVolumeSpecName: "kube-api-access-pkkq5") pod "25f013e8-9c08-40f3-84d8-2ddcb5528f44" (UID: "25f013e8-9c08-40f3-84d8-2ddcb5528f44"). InnerVolumeSpecName "kube-api-access-pkkq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.660034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory" (OuterVolumeSpecName: "inventory") pod "25f013e8-9c08-40f3-84d8-2ddcb5528f44" (UID: "25f013e8-9c08-40f3-84d8-2ddcb5528f44"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.677291 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "25f013e8-9c08-40f3-84d8-2ddcb5528f44" (UID: "25f013e8-9c08-40f3-84d8-2ddcb5528f44"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.734833 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.734869 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f013e8-9c08-40f3-84d8-2ddcb5528f44-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:58:29 crc kubenswrapper[4725]: I1002 11:58:29.734879 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkkq5\" (UniqueName: \"kubernetes.io/projected/25f013e8-9c08-40f3-84d8-2ddcb5528f44-kube-api-access-pkkq5\") on node \"crc\" DevicePath \"\"" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.053385 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" event={"ID":"25f013e8-9c08-40f3-84d8-2ddcb5528f44","Type":"ContainerDied","Data":"db0b84496f533d2d76e16e3881228cb27b1d4cf8df38b42500e515f0f840ccdd"} Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.053765 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0b84496f533d2d76e16e3881228cb27b1d4cf8df38b42500e515f0f840ccdd" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.053877 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nq8fs" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.238943 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r"] Oct 02 11:58:30 crc kubenswrapper[4725]: E1002 11:58:30.239447 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f013e8-9c08-40f3-84d8-2ddcb5528f44" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.239468 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f013e8-9c08-40f3-84d8-2ddcb5528f44" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.239673 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f013e8-9c08-40f3-84d8-2ddcb5528f44" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.240457 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.242509 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.242515 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.243495 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.244492 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.246497 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r"] Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.357162 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2znm\" (UniqueName: \"kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.357223 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.357258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.458619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2znm\" (UniqueName: \"kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.458682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.458704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" 
(UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.464741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.477983 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.485537 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2znm\" (UniqueName: \"kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:30 crc kubenswrapper[4725]: I1002 11:58:30.557181 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:58:31 crc kubenswrapper[4725]: I1002 11:58:31.144879 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r"] Oct 02 11:58:32 crc kubenswrapper[4725]: I1002 11:58:32.079012 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" event={"ID":"b1d44487-97d6-4e7d-856d-61aec07be83c","Type":"ContainerStarted","Data":"1500f33868435ad204b19d77298c3c070d982f2e3d28e8345e636bef0e09761c"} Oct 02 11:58:32 crc kubenswrapper[4725]: I1002 11:58:32.081028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" event={"ID":"b1d44487-97d6-4e7d-856d-61aec07be83c","Type":"ContainerStarted","Data":"c531fd2199810f849314461b99a01658a1ccb34ec94c12c6ff19bb1882671530"} Oct 02 11:58:32 crc kubenswrapper[4725]: I1002 11:58:32.100539 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" podStartSLOduration=1.556984015 podStartE2EDuration="2.100517849s" podCreationTimestamp="2025-10-02 11:58:30 +0000 UTC" firstStartedPulling="2025-10-02 11:58:31.154282393 +0000 UTC m=+1831.061781856" lastFinishedPulling="2025-10-02 11:58:31.697816227 +0000 UTC m=+1831.605315690" observedRunningTime="2025-10-02 11:58:32.09946008 +0000 UTC m=+1832.006959563" watchObservedRunningTime="2025-10-02 11:58:32.100517849 +0000 UTC m=+1832.008017332" Oct 02 11:58:32 crc kubenswrapper[4725]: I1002 11:58:32.268230 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:58:32 crc kubenswrapper[4725]: E1002 11:58:32.268788 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.599502 4725 scope.go:117] "RemoveContainer" containerID="bade04a920a243a4390c715c14c24939ac948e46a6368256b916dfce4a9a105f" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.620017 4725 scope.go:117] "RemoveContainer" containerID="39417191c64b57ee831b5a5df0fc5eac8e7f10e43d7060638987b01c3a7aad40" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.668456 4725 scope.go:117] "RemoveContainer" containerID="b7e9b1ee574a8b19eb2a80b250dab80ed08ca9f07c9e584acf0e8fc05d53ed65" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.732878 4725 scope.go:117] "RemoveContainer" containerID="938638e1841eb22d9d67b42574477b2499b5bafacd9b5e48ffd829af54925e56" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.773251 4725 scope.go:117] "RemoveContainer" containerID="ecca87986d1d6c49a7d5f56ca6d9c88ab4042ea69ae8bf49fdaeb8ec9a240921" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.833543 4725 scope.go:117] "RemoveContainer" containerID="9574abc912ac543f89ecab186ce697a96f728d5c8d2c4d788b691fc7f8b48058" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.860104 4725 scope.go:117] "RemoveContainer" containerID="d88388a4db264ce81276415aad8169cf189633a0b5ec52aaeed3c9be83072a71" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.892107 4725 scope.go:117] "RemoveContainer" containerID="a3346e71f4700662902bdc8bf01640483cfcd2d2057eb988366f95c656c5e8fb" Oct 02 11:58:34 crc kubenswrapper[4725]: I1002 11:58:34.925196 4725 scope.go:117] "RemoveContainer" containerID="ef2fed30b6e70305d56d698d900485db247542728b67bcddf31fac28275739d7" Oct 02 11:58:37 crc kubenswrapper[4725]: I1002 11:58:37.052002 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rr568"] Oct 02 11:58:37 crc kubenswrapper[4725]: I1002 11:58:37.061449 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rr568"] Oct 02 11:58:37 crc kubenswrapper[4725]: I1002 11:58:37.282071 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd75c12f-2fcf-485d-9bbe-e5d1564af0e0" path="/var/lib/kubelet/pods/cd75c12f-2fcf-485d-9bbe-e5d1564af0e0/volumes" Oct 02 11:58:44 crc kubenswrapper[4725]: I1002 11:58:44.268003 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:58:44 crc kubenswrapper[4725]: E1002 11:58:44.268994 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:58:54 crc kubenswrapper[4725]: I1002 11:58:54.041742 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9p69"] Oct 02 11:58:54 crc kubenswrapper[4725]: I1002 11:58:54.051788 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9p69"] Oct 02 11:58:55 crc kubenswrapper[4725]: I1002 11:58:55.284118 4725 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="92ee54c2-e42c-4233-ad3c-062d28b43fb5" path="/var/lib/kubelet/pods/92ee54c2-e42c-4233-ad3c-062d28b43fb5/volumes" Oct 02 11:58:58 crc kubenswrapper[4725]: I1002 11:58:58.268517 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:58:58 crc kubenswrapper[4725]: E1002 11:58:58.269444 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:59:12 crc kubenswrapper[4725]: I1002 11:59:12.268330 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:59:12 crc kubenswrapper[4725]: E1002 11:59:12.269342 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:59:14 crc kubenswrapper[4725]: I1002 11:59:14.040140 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z786"] Oct 02 11:59:14 crc kubenswrapper[4725]: I1002 11:59:14.047179 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z786"] Oct 02 11:59:15 crc kubenswrapper[4725]: I1002 11:59:15.282627 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a682c38-9f47-4cf9-9111-c709313b0a72" path="/var/lib/kubelet/pods/3a682c38-9f47-4cf9-9111-c709313b0a72/volumes" Oct 02 11:59:25 crc kubenswrapper[4725]: I1002 11:59:25.614095 4725 generic.go:334] "Generic (PLEG): container finished" podID="b1d44487-97d6-4e7d-856d-61aec07be83c" containerID="1500f33868435ad204b19d77298c3c070d982f2e3d28e8345e636bef0e09761c" exitCode=2 Oct 02 11:59:25 crc kubenswrapper[4725]: I1002 11:59:25.614178 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" event={"ID":"b1d44487-97d6-4e7d-856d-61aec07be83c","Type":"ContainerDied","Data":"1500f33868435ad204b19d77298c3c070d982f2e3d28e8345e636bef0e09761c"} Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.191670 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.241987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key\") pod \"b1d44487-97d6-4e7d-856d-61aec07be83c\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.242321 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2znm\" (UniqueName: \"kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm\") pod \"b1d44487-97d6-4e7d-856d-61aec07be83c\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.242380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory\") pod \"b1d44487-97d6-4e7d-856d-61aec07be83c\" (UID: \"b1d44487-97d6-4e7d-856d-61aec07be83c\") " Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.249338 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm" (OuterVolumeSpecName: "kube-api-access-x2znm") pod "b1d44487-97d6-4e7d-856d-61aec07be83c" (UID: "b1d44487-97d6-4e7d-856d-61aec07be83c"). InnerVolumeSpecName "kube-api-access-x2znm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.269054 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:59:27 crc kubenswrapper[4725]: E1002 11:59:27.269518 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.292751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1d44487-97d6-4e7d-856d-61aec07be83c" (UID: "b1d44487-97d6-4e7d-856d-61aec07be83c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.297517 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory" (OuterVolumeSpecName: "inventory") pod "b1d44487-97d6-4e7d-856d-61aec07be83c" (UID: "b1d44487-97d6-4e7d-856d-61aec07be83c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.344952 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2znm\" (UniqueName: \"kubernetes.io/projected/b1d44487-97d6-4e7d-856d-61aec07be83c-kube-api-access-x2znm\") on node \"crc\" DevicePath \"\"" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.344988 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.345001 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1d44487-97d6-4e7d-856d-61aec07be83c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.643665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" event={"ID":"b1d44487-97d6-4e7d-856d-61aec07be83c","Type":"ContainerDied","Data":"c531fd2199810f849314461b99a01658a1ccb34ec94c12c6ff19bb1882671530"} Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.644063 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c531fd2199810f849314461b99a01658a1ccb34ec94c12c6ff19bb1882671530" Oct 02 11:59:27 crc kubenswrapper[4725]: I1002 11:59:27.643878 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.027397 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b"] Oct 02 11:59:35 crc kubenswrapper[4725]: E1002 11:59:35.028341 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d44487-97d6-4e7d-856d-61aec07be83c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.028357 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d44487-97d6-4e7d-856d-61aec07be83c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.028565 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d44487-97d6-4e7d-856d-61aec07be83c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.029351 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.032272 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.033438 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.033843 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.034143 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.040196 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b"] Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.095943 4725 scope.go:117] "RemoveContainer" containerID="029ce029af7df613cfa3b5835a2f4542b522b3642a28da583eff90049c57de8f" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.139326 4725 scope.go:117] "RemoveContainer" containerID="4f604fa3fa5900f27cd1fe9880a11824e0372a39669b4637da6abf8f76643092" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.158905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.158992 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgqw\" (UniqueName: \"kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.159077 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.202230 4725 scope.go:117] "RemoveContainer" containerID="2b582f6638491081d26584705e354265af28a6267eede14c895dab78ccba9184" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.260525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.260600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgqw\" (UniqueName: 
\"kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.260638 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.273781 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.273827 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.276389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgqw\" (UniqueName: \"kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z874b\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.358392 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 11:59:35 crc kubenswrapper[4725]: I1002 11:59:35.894161 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b"] Oct 02 11:59:36 crc kubenswrapper[4725]: I1002 11:59:36.737166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" event={"ID":"2864a400-a21a-4c43-b078-16fece86e8fb","Type":"ContainerStarted","Data":"e34df759449afbc70d00ec35e77086c95d326ccc4581167d46dc0f57c6337d29"} Oct 02 11:59:36 crc kubenswrapper[4725]: I1002 11:59:36.737591 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" event={"ID":"2864a400-a21a-4c43-b078-16fece86e8fb","Type":"ContainerStarted","Data":"7bb770deca194a1bbae9737be65986d878373484780996b5c35c65d40d4f990d"} Oct 02 11:59:36 crc kubenswrapper[4725]: I1002 11:59:36.789901 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" podStartSLOduration=1.399500729 podStartE2EDuration="1.789868807s" podCreationTimestamp="2025-10-02 11:59:35 +0000 UTC" firstStartedPulling="2025-10-02 11:59:35.897403628 +0000 UTC m=+1895.804903091" lastFinishedPulling="2025-10-02 11:59:36.287771706 +0000 UTC m=+1896.195271169" observedRunningTime="2025-10-02 11:59:36.767163714 +0000 UTC m=+1896.674663177" watchObservedRunningTime="2025-10-02 11:59:36.789868807 +0000 UTC m=+1896.697368310" Oct 02 11:59:40 crc kubenswrapper[4725]: I1002 11:59:40.036898 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-29k72"] Oct 02 11:59:40 crc kubenswrapper[4725]: I1002 11:59:40.043160 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-29k72"] Oct 02 11:59:41 crc kubenswrapper[4725]: I1002 11:59:41.280921 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:59:41 crc kubenswrapper[4725]: E1002 11:59:41.281217 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 11:59:41 crc kubenswrapper[4725]: I1002 11:59:41.291383 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe1b524-1eb2-4952-800a-9e3f70f7690e" path="/var/lib/kubelet/pods/ebe1b524-1eb2-4952-800a-9e3f70f7690e/volumes" Oct 02 11:59:55 crc kubenswrapper[4725]: I1002 11:59:55.267775 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 11:59:55 crc kubenswrapper[4725]: E1002 11:59:55.268523 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 
12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.153386 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7"] Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.155512 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.163680 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.164917 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.187526 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7"] Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.285930 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.286311 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvh4\" (UniqueName: \"kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.286431 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.388104 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.388274 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvh4\" (UniqueName: \"kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.388341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.389880 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.395928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.411743 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvh4\" (UniqueName: \"kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4\") pod \"collect-profiles-29323440-tcvz7\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.499501 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:00 crc kubenswrapper[4725]: I1002 12:00:00.997613 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7"] Oct 02 12:00:02 crc kubenswrapper[4725]: I1002 12:00:02.021576 4725 generic.go:334] "Generic (PLEG): container finished" podID="9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" containerID="7b136c1bacc670bf1f81c9b92bfb145f6cd657dc70a3d6e9caa27eb8d53d988a" exitCode=0 Oct 02 12:00:02 crc kubenswrapper[4725]: I1002 12:00:02.021992 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" event={"ID":"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4","Type":"ContainerDied","Data":"7b136c1bacc670bf1f81c9b92bfb145f6cd657dc70a3d6e9caa27eb8d53d988a"} Oct 02 12:00:02 crc kubenswrapper[4725]: I1002 12:00:02.022025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" event={"ID":"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4","Type":"ContainerStarted","Data":"6b6cd01fbc30733d520215a71181ddb308388347b8e97be19b5adfeb0b3d26df"} Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.452617 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.553949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume\") pod \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.554203 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume\") pod \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.554255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvh4\" (UniqueName: \"kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4\") pod \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\" (UID: \"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4\") " Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.554920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" (UID: "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.561410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" (UID: "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.561599 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4" (OuterVolumeSpecName: "kube-api-access-9nvh4") pod "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" (UID: "9b48ddba-c627-4fa9-bcf5-e6e39185bfc4"). InnerVolumeSpecName "kube-api-access-9nvh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.656260 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.656299 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvh4\" (UniqueName: \"kubernetes.io/projected/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-kube-api-access-9nvh4\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:03 crc kubenswrapper[4725]: I1002 12:00:03.656311 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b48ddba-c627-4fa9-bcf5-e6e39185bfc4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:04 crc kubenswrapper[4725]: I1002 12:00:04.058882 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" event={"ID":"9b48ddba-c627-4fa9-bcf5-e6e39185bfc4","Type":"ContainerDied","Data":"6b6cd01fbc30733d520215a71181ddb308388347b8e97be19b5adfeb0b3d26df"} Oct 02 12:00:04 crc kubenswrapper[4725]: I1002 12:00:04.059174 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6cd01fbc30733d520215a71181ddb308388347b8e97be19b5adfeb0b3d26df" Oct 02 12:00:04 crc kubenswrapper[4725]: I1002 12:00:04.059246 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323440-tcvz7" Oct 02 12:00:10 crc kubenswrapper[4725]: I1002 12:00:10.269420 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:00:10 crc kubenswrapper[4725]: E1002 12:00:10.270861 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:00:21 crc kubenswrapper[4725]: I1002 12:00:21.283305 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:00:21 crc kubenswrapper[4725]: E1002 12:00:21.284508 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:00:23 crc kubenswrapper[4725]: I1002 12:00:23.254639 4725 generic.go:334] "Generic (PLEG): container finished" podID="2864a400-a21a-4c43-b078-16fece86e8fb" containerID="e34df759449afbc70d00ec35e77086c95d326ccc4581167d46dc0f57c6337d29" exitCode=0 Oct 02 12:00:23 crc kubenswrapper[4725]: I1002 12:00:23.254799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" 
event={"ID":"2864a400-a21a-4c43-b078-16fece86e8fb","Type":"ContainerDied","Data":"e34df759449afbc70d00ec35e77086c95d326ccc4581167d46dc0f57c6337d29"} Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.660501 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.778463 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory\") pod \"2864a400-a21a-4c43-b078-16fece86e8fb\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.778625 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key\") pod \"2864a400-a21a-4c43-b078-16fece86e8fb\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.778662 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzgqw\" (UniqueName: \"kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw\") pod \"2864a400-a21a-4c43-b078-16fece86e8fb\" (UID: \"2864a400-a21a-4c43-b078-16fece86e8fb\") " Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.789431 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw" (OuterVolumeSpecName: "kube-api-access-hzgqw") pod "2864a400-a21a-4c43-b078-16fece86e8fb" (UID: "2864a400-a21a-4c43-b078-16fece86e8fb"). InnerVolumeSpecName "kube-api-access-hzgqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.847373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory" (OuterVolumeSpecName: "inventory") pod "2864a400-a21a-4c43-b078-16fece86e8fb" (UID: "2864a400-a21a-4c43-b078-16fece86e8fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.851042 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2864a400-a21a-4c43-b078-16fece86e8fb" (UID: "2864a400-a21a-4c43-b078-16fece86e8fb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.880899 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.880927 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2864a400-a21a-4c43-b078-16fece86e8fb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:24 crc kubenswrapper[4725]: I1002 12:00:24.880937 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzgqw\" (UniqueName: \"kubernetes.io/projected/2864a400-a21a-4c43-b078-16fece86e8fb-kube-api-access-hzgqw\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.275291 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.279890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z874b" event={"ID":"2864a400-a21a-4c43-b078-16fece86e8fb","Type":"ContainerDied","Data":"7bb770deca194a1bbae9737be65986d878373484780996b5c35c65d40d4f990d"} Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.279935 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb770deca194a1bbae9737be65986d878373484780996b5c35c65d40d4f990d" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.379545 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9kgwd"] Oct 02 12:00:25 crc kubenswrapper[4725]: E1002 12:00:25.379935 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" containerName="collect-profiles" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.379950 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" containerName="collect-profiles" Oct 02 12:00:25 crc kubenswrapper[4725]: E1002 12:00:25.379970 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2864a400-a21a-4c43-b078-16fece86e8fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.379978 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2864a400-a21a-4c43-b078-16fece86e8fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.380175 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2864a400-a21a-4c43-b078-16fece86e8fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.380195 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b48ddba-c627-4fa9-bcf5-e6e39185bfc4" containerName="collect-profiles" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.380832 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.383613 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.383668 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.385643 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.394102 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.395386 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9kgwd"] Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.491106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.491303 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c2l7\" (UniqueName: \"kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.491944 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.593916 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.593992 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.594058 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c2l7\" (UniqueName: \"kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc 
kubenswrapper[4725]: I1002 12:00:25.598355 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.611302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c2l7\" (UniqueName: \"kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.614223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9kgwd\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:25 crc kubenswrapper[4725]: I1002 12:00:25.705532 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:26 crc kubenswrapper[4725]: I1002 12:00:26.256958 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9kgwd"] Oct 02 12:00:26 crc kubenswrapper[4725]: W1002 12:00:26.258820 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3377a61c_191d_4a8c_a342_9556746ea6e0.slice/crio-890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4 WatchSource:0}: Error finding container 890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4: Status 404 returned error can't find the container with id 890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4 Oct 02 12:00:26 crc kubenswrapper[4725]: I1002 12:00:26.293631 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" event={"ID":"3377a61c-191d-4a8c-a342-9556746ea6e0","Type":"ContainerStarted","Data":"890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4"} Oct 02 12:00:27 crc kubenswrapper[4725]: I1002 12:00:27.308583 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" event={"ID":"3377a61c-191d-4a8c-a342-9556746ea6e0","Type":"ContainerStarted","Data":"c91a0809863feb6507be389579fe63f8eaaa76909c9194f0f1403cc1aaa351e0"} Oct 02 12:00:27 crc kubenswrapper[4725]: I1002 12:00:27.332587 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" podStartSLOduration=1.798027638 podStartE2EDuration="2.332559143s" podCreationTimestamp="2025-10-02 12:00:25 +0000 UTC" firstStartedPulling="2025-10-02 12:00:26.261907042 +0000 UTC m=+1946.169406545" lastFinishedPulling="2025-10-02 12:00:26.796438547 +0000 UTC m=+1946.703938050" observedRunningTime="2025-10-02 12:00:27.325585605 +0000 UTC m=+1947.233085138" watchObservedRunningTime="2025-10-02 12:00:27.332559143 +0000 UTC m=+1947.240058646" Oct 02 12:00:32 crc kubenswrapper[4725]: I1002 12:00:32.268194 4725 scope.go:117] "RemoveContainer" 
containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:00:32 crc kubenswrapper[4725]: E1002 12:00:32.269153 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:00:34 crc kubenswrapper[4725]: I1002 12:00:34.373415 4725 generic.go:334] "Generic (PLEG): container finished" podID="3377a61c-191d-4a8c-a342-9556746ea6e0" containerID="c91a0809863feb6507be389579fe63f8eaaa76909c9194f0f1403cc1aaa351e0" exitCode=0 Oct 02 12:00:34 crc kubenswrapper[4725]: I1002 12:00:34.373493 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" event={"ID":"3377a61c-191d-4a8c-a342-9556746ea6e0","Type":"ContainerDied","Data":"c91a0809863feb6507be389579fe63f8eaaa76909c9194f0f1403cc1aaa351e0"} Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.293638 4725 scope.go:117] "RemoveContainer" containerID="0104793470e16053cf855d90c43de6c2710ea42316fca074ab9a281be5a2fae7" Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.760977 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.900687 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c2l7\" (UniqueName: \"kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7\") pod \"3377a61c-191d-4a8c-a342-9556746ea6e0\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.900759 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0\") pod \"3377a61c-191d-4a8c-a342-9556746ea6e0\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.900859 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam\") pod \"3377a61c-191d-4a8c-a342-9556746ea6e0\" (UID: \"3377a61c-191d-4a8c-a342-9556746ea6e0\") " Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.908960 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7" (OuterVolumeSpecName: "kube-api-access-8c2l7") pod "3377a61c-191d-4a8c-a342-9556746ea6e0" (UID: "3377a61c-191d-4a8c-a342-9556746ea6e0"). InnerVolumeSpecName "kube-api-access-8c2l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.930823 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3377a61c-191d-4a8c-a342-9556746ea6e0" (UID: "3377a61c-191d-4a8c-a342-9556746ea6e0"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:35 crc kubenswrapper[4725]: I1002 12:00:35.933428 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3377a61c-191d-4a8c-a342-9556746ea6e0" (UID: "3377a61c-191d-4a8c-a342-9556746ea6e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.002735 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c2l7\" (UniqueName: \"kubernetes.io/projected/3377a61c-191d-4a8c-a342-9556746ea6e0-kube-api-access-8c2l7\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.002777 4725 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.002790 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3377a61c-191d-4a8c-a342-9556746ea6e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.394945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" event={"ID":"3377a61c-191d-4a8c-a342-9556746ea6e0","Type":"ContainerDied","Data":"890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4"} Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.395005 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890d9aec8c8efaf314000ee364f473dced9e229b1c5cb21b2d297ab48e7254c4" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.395104 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9kgwd" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.470679 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr"] Oct 02 12:00:36 crc kubenswrapper[4725]: E1002 12:00:36.471267 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3377a61c-191d-4a8c-a342-9556746ea6e0" containerName="ssh-known-hosts-edpm-deployment" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.471293 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3377a61c-191d-4a8c-a342-9556746ea6e0" containerName="ssh-known-hosts-edpm-deployment" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.471517 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3377a61c-191d-4a8c-a342-9556746ea6e0" containerName="ssh-known-hosts-edpm-deployment" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.472293 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.477110 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.477161 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.477239 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.485189 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.490280 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr"] Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.624928 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.624990 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh69c\" (UniqueName: \"kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.625275 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.727112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.727254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.727290 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh69c\" (UniqueName: \"kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.731482 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.732208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.744373 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh69c\" (UniqueName: \"kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-97mpr\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:36 crc kubenswrapper[4725]: I1002 12:00:36.798315 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.052371 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.054921 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.069824 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.235976 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.236064 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.236574 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7fl\" (UniqueName: \"kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.338856 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.338987 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7fl\" (UniqueName: \"kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.339103 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.339744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.340016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.348559 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr"] Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.362652 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7fl\" (UniqueName: \"kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl\") pod \"redhat-marketplace-mlvfv\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.391258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.416983 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" event={"ID":"058f1e15-ad8b-4b61-a8e9-f98422ba2151","Type":"ContainerStarted","Data":"c9fb212154ef03f7513a5a7645a2170a65daeefbbd3875bf7173642e510235ea"} Oct 02 12:00:37 crc kubenswrapper[4725]: I1002 12:00:37.978146 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:38 crc kubenswrapper[4725]: I1002 12:00:38.427622 4725 generic.go:334] "Generic (PLEG): container finished" podID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerID="1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b" exitCode=0 Oct 02 12:00:38 crc kubenswrapper[4725]: I1002 12:00:38.427755 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerDied","Data":"1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b"} Oct 02 12:00:38 crc kubenswrapper[4725]: I1002 12:00:38.428058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerStarted","Data":"dbf1066309c1223d5aaacf35a4a6024a1097d6cd28cefb7a506857447bfe5d99"} Oct 02 12:00:38 crc kubenswrapper[4725]: I1002 12:00:38.433834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" event={"ID":"058f1e15-ad8b-4b61-a8e9-f98422ba2151","Type":"ContainerStarted","Data":"e8574482c050ac44a6e31bf3e1d10d7f2600ac87049ee11ff38d333063cbd4f9"} Oct 02 12:00:38 crc kubenswrapper[4725]: I1002 12:00:38.472514 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" podStartSLOduration=1.965199336 podStartE2EDuration="2.472495835s" podCreationTimestamp="2025-10-02 12:00:36 +0000 UTC" firstStartedPulling="2025-10-02 12:00:37.359962044 +0000 UTC m=+1957.267461517" lastFinishedPulling="2025-10-02 12:00:37.867258553 +0000 UTC m=+1957.774758016" observedRunningTime="2025-10-02 12:00:38.465434934 +0000 UTC m=+1958.372934407" watchObservedRunningTime="2025-10-02 12:00:38.472495835 +0000 UTC m=+1958.379995298" Oct 02 12:00:40 crc kubenswrapper[4725]: I1002 12:00:40.453100 4725 generic.go:334] "Generic (PLEG): container finished" podID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerID="ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e" exitCode=0 Oct 02 12:00:40 crc kubenswrapper[4725]: I1002 12:00:40.453158 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" 
event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerDied","Data":"ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e"} Oct 02 12:00:41 crc kubenswrapper[4725]: I1002 12:00:41.477235 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerStarted","Data":"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6"} Oct 02 12:00:41 crc kubenswrapper[4725]: I1002 12:00:41.501099 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mlvfv" podStartSLOduration=1.740959888 podStartE2EDuration="4.501080819s" podCreationTimestamp="2025-10-02 12:00:37 +0000 UTC" firstStartedPulling="2025-10-02 12:00:38.429495394 +0000 UTC m=+1958.336994857" lastFinishedPulling="2025-10-02 12:00:41.189616285 +0000 UTC m=+1961.097115788" observedRunningTime="2025-10-02 12:00:41.494991895 +0000 UTC m=+1961.402491358" watchObservedRunningTime="2025-10-02 12:00:41.501080819 +0000 UTC m=+1961.408580282" Oct 02 12:00:46 crc kubenswrapper[4725]: I1002 12:00:46.268459 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:00:46 crc kubenswrapper[4725]: E1002 12:00:46.269223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:00:46 crc kubenswrapper[4725]: I1002 12:00:46.523504 4725 generic.go:334] "Generic (PLEG): container finished" podID="058f1e15-ad8b-4b61-a8e9-f98422ba2151" containerID="e8574482c050ac44a6e31bf3e1d10d7f2600ac87049ee11ff38d333063cbd4f9" exitCode=0 Oct 02 12:00:46 crc kubenswrapper[4725]: I1002 12:00:46.523543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" event={"ID":"058f1e15-ad8b-4b61-a8e9-f98422ba2151","Type":"ContainerDied","Data":"e8574482c050ac44a6e31bf3e1d10d7f2600ac87049ee11ff38d333063cbd4f9"} Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.391936 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.391992 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.451215 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.597926 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.693759 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:47 crc kubenswrapper[4725]: I1002 12:00:47.991184 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.085626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key\") pod \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.085674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory\") pod \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.085711 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh69c\" (UniqueName: \"kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c\") pod \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\" (UID: \"058f1e15-ad8b-4b61-a8e9-f98422ba2151\") " Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.103579 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c" (OuterVolumeSpecName: "kube-api-access-lh69c") pod "058f1e15-ad8b-4b61-a8e9-f98422ba2151" (UID: "058f1e15-ad8b-4b61-a8e9-f98422ba2151"). InnerVolumeSpecName "kube-api-access-lh69c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.120948 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "058f1e15-ad8b-4b61-a8e9-f98422ba2151" (UID: "058f1e15-ad8b-4b61-a8e9-f98422ba2151"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.137182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory" (OuterVolumeSpecName: "inventory") pod "058f1e15-ad8b-4b61-a8e9-f98422ba2151" (UID: "058f1e15-ad8b-4b61-a8e9-f98422ba2151"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.187367 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.187400 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058f1e15-ad8b-4b61-a8e9-f98422ba2151-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.187410 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh69c\" (UniqueName: \"kubernetes.io/projected/058f1e15-ad8b-4b61-a8e9-f98422ba2151-kube-api-access-lh69c\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.548960 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.549198 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-97mpr" event={"ID":"058f1e15-ad8b-4b61-a8e9-f98422ba2151","Type":"ContainerDied","Data":"c9fb212154ef03f7513a5a7645a2170a65daeefbbd3875bf7173642e510235ea"} Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.552958 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9fb212154ef03f7513a5a7645a2170a65daeefbbd3875bf7173642e510235ea" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.616762 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk"] Oct 02 12:00:48 crc kubenswrapper[4725]: E1002 12:00:48.617167 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058f1e15-ad8b-4b61-a8e9-f98422ba2151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.617184 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="058f1e15-ad8b-4b61-a8e9-f98422ba2151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.617343 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="058f1e15-ad8b-4b61-a8e9-f98422ba2151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.617985 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.623206 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.623442 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.624127 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.624507 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.631159 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk"] Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.696836 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.696915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqspx\" (UniqueName: \"kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc 
kubenswrapper[4725]: I1002 12:00:48.696946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.798348 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.798471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqspx\" (UniqueName: \"kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.798510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.803498 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.805919 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.820169 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqspx\" (UniqueName: \"kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-472wk\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:48 crc kubenswrapper[4725]: I1002 12:00:48.953304 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:00:49 crc kubenswrapper[4725]: I1002 12:00:49.497411 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk"] Oct 02 12:00:49 crc kubenswrapper[4725]: I1002 12:00:49.559194 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" event={"ID":"e04f0ae5-20a3-47c1-a877-d717d8d7feb8","Type":"ContainerStarted","Data":"f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc"} Oct 02 12:00:49 crc kubenswrapper[4725]: I1002 12:00:49.559534 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mlvfv" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="registry-server" containerID="cri-o://78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6" gracePeriod=2 Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.098002 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.233351 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities\") pod \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.233487 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7fl\" (UniqueName: \"kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl\") pod \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.233562 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content\") pod \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\" (UID: \"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7\") " Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.234399 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities" (OuterVolumeSpecName: "utilities") pod "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" (UID: "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.245421 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" (UID: "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.255092 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl" (OuterVolumeSpecName: "kube-api-access-mc7fl") pod "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" (UID: "1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7"). InnerVolumeSpecName "kube-api-access-mc7fl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.336068 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.336101 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc7fl\" (UniqueName: \"kubernetes.io/projected/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-kube-api-access-mc7fl\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.336110 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.573785 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" event={"ID":"e04f0ae5-20a3-47c1-a877-d717d8d7feb8","Type":"ContainerStarted","Data":"701c50045a3d60c72460d538d70bef6c4fd48b665212314e4324e9ed92d936e8"} Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.582823 4725 generic.go:334] "Generic (PLEG): container finished" podID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerID="78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6" exitCode=0 Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.582877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerDied","Data":"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6"} Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.582908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlvfv" event={"ID":"1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7","Type":"ContainerDied","Data":"dbf1066309c1223d5aaacf35a4a6024a1097d6cd28cefb7a506857447bfe5d99"} Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.582927 4725 scope.go:117] "RemoveContainer" containerID="78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.582962 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlvfv" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.598062 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" podStartSLOduration=2.136981011 podStartE2EDuration="2.598042453s" podCreationTimestamp="2025-10-02 12:00:48 +0000 UTC" firstStartedPulling="2025-10-02 12:00:49.503713003 +0000 UTC m=+1969.411212506" lastFinishedPulling="2025-10-02 12:00:49.964774485 +0000 UTC m=+1969.872273948" observedRunningTime="2025-10-02 12:00:50.5938886 +0000 UTC m=+1970.501388073" watchObservedRunningTime="2025-10-02 12:00:50.598042453 +0000 UTC m=+1970.505541916" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.636863 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.640348 4725 scope.go:117] "RemoveContainer" containerID="ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.648479 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlvfv"] Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.660369 4725 scope.go:117] "RemoveContainer" containerID="1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.683072 4725 scope.go:117] "RemoveContainer" containerID="78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6" Oct 02 12:00:50 crc kubenswrapper[4725]: E1002 12:00:50.683903 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6\": container with ID starting with 78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6 not found: ID does not exist" containerID="78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.683931 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6"} err="failed to get container status \"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6\": rpc error: code = NotFound desc = could not find container \"78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6\": container with ID starting with 78257e9d5def59f9106c856059a32dfbeec085b1d5c3efe069cdb23f545d14d6 not found: ID does not exist" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.683950 4725 scope.go:117] "RemoveContainer" containerID="ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e" Oct 02 12:00:50 crc kubenswrapper[4725]: E1002 12:00:50.684182 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e\": container with ID starting with ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e not found: ID does not exist" containerID="ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.684199 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e"} err="failed to 
get container status \"ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e\": rpc error: code = NotFound desc = could not find container \"ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e\": container with ID starting with ba1a3e74cd9c3cc46cbbe9a88505b10f872827c5ddb6614598fd1378d6b80e3e not found: ID does not exist" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.684213 4725 scope.go:117] "RemoveContainer" containerID="1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b" Oct 02 12:00:50 crc kubenswrapper[4725]: E1002 12:00:50.684402 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b\": container with ID starting with 1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b not found: ID does not exist" containerID="1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b" Oct 02 12:00:50 crc kubenswrapper[4725]: I1002 12:00:50.684418 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b"} err="failed to get container status \"1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b\": rpc error: code = NotFound desc = could not find container \"1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b\": container with ID starting with 1d605ce94c406596e03d79d9809c9189bf1ca23644f083ac5d0fbee18f61160b not found: ID does not exist" Oct 02 12:00:51 crc kubenswrapper[4725]: I1002 12:00:51.280654 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" path="/var/lib/kubelet/pods/1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7/volumes" Oct 02 12:00:59 crc kubenswrapper[4725]: I1002 12:00:59.267790 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:00:59 crc kubenswrapper[4725]: E1002 12:00:59.268746 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:00:59 crc kubenswrapper[4725]: I1002 12:00:59.687608 4725 generic.go:334] "Generic (PLEG): container finished" podID="e04f0ae5-20a3-47c1-a877-d717d8d7feb8" containerID="701c50045a3d60c72460d538d70bef6c4fd48b665212314e4324e9ed92d936e8" exitCode=0 Oct 02 12:00:59 crc kubenswrapper[4725]: I1002 12:00:59.687689 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" event={"ID":"e04f0ae5-20a3-47c1-a877-d717d8d7feb8","Type":"ContainerDied","Data":"701c50045a3d60c72460d538d70bef6c4fd48b665212314e4324e9ed92d936e8"} Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.159425 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29323441-5xvm4"] Oct 02 12:01:00 crc kubenswrapper[4725]: E1002 12:01:00.159848 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="extract-utilities" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.159872 4725 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="extract-utilities" Oct 02 12:01:00 crc kubenswrapper[4725]: E1002 12:01:00.159885 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="registry-server" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.159893 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="registry-server" Oct 02 12:01:00 crc kubenswrapper[4725]: E1002 12:01:00.159907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="extract-content" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.159915 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="extract-content" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.160158 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4dbd52-4604-49c4-9fc3-79f1f27dc0f7" containerName="registry-server" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.160918 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.179158 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323441-5xvm4"] Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.248428 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.248802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wdv\" (UniqueName: \"kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.248889 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.248987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.350348 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.350516 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.350603 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wdv\" (UniqueName: \"kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.350624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.357970 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.360166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.372371 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.373277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2wdv\" (UniqueName: \"kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv\") pod \"keystone-cron-29323441-5xvm4\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.482272 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:00 crc kubenswrapper[4725]: I1002 12:01:00.936844 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323441-5xvm4"] Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.051927 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.182985 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key\") pod \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.183347 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqspx\" (UniqueName: \"kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx\") pod \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.183397 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory\") pod \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\" (UID: \"e04f0ae5-20a3-47c1-a877-d717d8d7feb8\") " Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.190378 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx" (OuterVolumeSpecName: "kube-api-access-zqspx") pod "e04f0ae5-20a3-47c1-a877-d717d8d7feb8" (UID: "e04f0ae5-20a3-47c1-a877-d717d8d7feb8"). InnerVolumeSpecName "kube-api-access-zqspx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.210137 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory" (OuterVolumeSpecName: "inventory") pod "e04f0ae5-20a3-47c1-a877-d717d8d7feb8" (UID: "e04f0ae5-20a3-47c1-a877-d717d8d7feb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.213363 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e04f0ae5-20a3-47c1-a877-d717d8d7feb8" (UID: "e04f0ae5-20a3-47c1-a877-d717d8d7feb8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.288902 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.288939 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.288951 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqspx\" (UniqueName: \"kubernetes.io/projected/e04f0ae5-20a3-47c1-a877-d717d8d7feb8-kube-api-access-zqspx\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.704895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" event={"ID":"e04f0ae5-20a3-47c1-a877-d717d8d7feb8","Type":"ContainerDied","Data":"f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc"} Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.704938 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.704991 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-472wk" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.706660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323441-5xvm4" event={"ID":"7aa98170-54f3-4694-95d1-22b25f1512ba","Type":"ContainerStarted","Data":"b371e4925940ae13bb41e7bb0523ef5ea6715490f426af37b726c238665712d1"} Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.706730 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323441-5xvm4" event={"ID":"7aa98170-54f3-4694-95d1-22b25f1512ba","Type":"ContainerStarted","Data":"bab9c40860d32c25da6fc0eb428a6fb6af5281595758521b7e81b4cb4e4ff2e6"} Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.734550 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29323441-5xvm4" podStartSLOduration=1.734535233 podStartE2EDuration="1.734535233s" podCreationTimestamp="2025-10-02 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 12:01:01.729691193 +0000 UTC m=+1981.637190656" watchObservedRunningTime="2025-10-02 12:01:01.734535233 +0000 UTC m=+1981.642034696" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.810581 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68"] Oct 02 12:01:01 crc kubenswrapper[4725]: E1002 12:01:01.811172 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04f0ae5-20a3-47c1-a877-d717d8d7feb8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.811245 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04f0ae5-20a3-47c1-a877-d717d8d7feb8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.811500 4725 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e04f0ae5-20a3-47c1-a877-d717d8d7feb8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.812349 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.815368 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.815432 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.815448 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.818780 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.818970 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.819584 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.820453 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.822432 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.824637 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68"] Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911422 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 
12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4rw\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911539 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911623 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911651 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911750 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911794 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:01 crc kubenswrapper[4725]: I1002 12:01:01.911824 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013705 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4rw\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013811 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013831 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013888 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013917 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013940 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013963 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.013995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.014010 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.014038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.014070 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.018581 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.018686 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.019086 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.020414 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.021317 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.022197 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.022756 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.023895 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.027980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.032453 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.037402 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.038037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.038662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.038882 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4rw\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hgh68\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.132543 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:02 crc kubenswrapper[4725]: I1002 12:01:02.710357 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68"] Oct 02 12:01:02 crc kubenswrapper[4725]: W1002 12:01:02.710550 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c6c1e1_6428_4023_b56e_ee525bc50c65.slice/crio-60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633 WatchSource:0}: Error finding container 60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633: Status 404 returned error can't find the container with id 60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633 Oct 02 12:01:03 crc kubenswrapper[4725]: I1002 12:01:03.724449 4725 generic.go:334] "Generic (PLEG): container finished" podID="7aa98170-54f3-4694-95d1-22b25f1512ba" containerID="b371e4925940ae13bb41e7bb0523ef5ea6715490f426af37b726c238665712d1" exitCode=0 Oct 02 12:01:03 crc kubenswrapper[4725]: I1002 12:01:03.724549 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323441-5xvm4" event={"ID":"7aa98170-54f3-4694-95d1-22b25f1512ba","Type":"ContainerDied","Data":"b371e4925940ae13bb41e7bb0523ef5ea6715490f426af37b726c238665712d1"} Oct 02 12:01:03 crc kubenswrapper[4725]: I1002 12:01:03.728068 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" event={"ID":"72c6c1e1-6428-4023-b56e-ee525bc50c65","Type":"ContainerStarted","Data":"60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633"} Oct 02 12:01:04 crc kubenswrapper[4725]: I1002 12:01:04.740674 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" event={"ID":"72c6c1e1-6428-4023-b56e-ee525bc50c65","Type":"ContainerStarted","Data":"644ad6a6fd9685fb9befa0b6701fca8710524ee7c4eae921a85e065ad2ea16ee"} Oct 02 12:01:04 crc kubenswrapper[4725]: I1002 12:01:04.775952 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" podStartSLOduration=2.6976658860000002 podStartE2EDuration="3.775933222s" podCreationTimestamp="2025-10-02 12:01:01 +0000 UTC" firstStartedPulling="2025-10-02 12:01:02.713284074 +0000 UTC m=+1982.620783537" lastFinishedPulling="2025-10-02 12:01:03.79155141 +0000 UTC m=+1983.699050873" observedRunningTime="2025-10-02 12:01:04.772534051 +0000 UTC m=+1984.680033524" 
watchObservedRunningTime="2025-10-02 12:01:04.775933222 +0000 UTC m=+1984.683432695" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.085291 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.179620 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2wdv\" (UniqueName: \"kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv\") pod \"7aa98170-54f3-4694-95d1-22b25f1512ba\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.179689 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys\") pod \"7aa98170-54f3-4694-95d1-22b25f1512ba\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.179818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle\") pod \"7aa98170-54f3-4694-95d1-22b25f1512ba\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.179967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data\") pod \"7aa98170-54f3-4694-95d1-22b25f1512ba\" (UID: \"7aa98170-54f3-4694-95d1-22b25f1512ba\") " Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.185597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv" (OuterVolumeSpecName: "kube-api-access-g2wdv") pod "7aa98170-54f3-4694-95d1-22b25f1512ba" (UID: "7aa98170-54f3-4694-95d1-22b25f1512ba"). InnerVolumeSpecName "kube-api-access-g2wdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.187168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7aa98170-54f3-4694-95d1-22b25f1512ba" (UID: "7aa98170-54f3-4694-95d1-22b25f1512ba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.209140 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa98170-54f3-4694-95d1-22b25f1512ba" (UID: "7aa98170-54f3-4694-95d1-22b25f1512ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.247654 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data" (OuterVolumeSpecName: "config-data") pod "7aa98170-54f3-4694-95d1-22b25f1512ba" (UID: "7aa98170-54f3-4694-95d1-22b25f1512ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.281977 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.282009 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2wdv\" (UniqueName: \"kubernetes.io/projected/7aa98170-54f3-4694-95d1-22b25f1512ba-kube-api-access-g2wdv\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.282022 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.282034 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa98170-54f3-4694-95d1-22b25f1512ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.752879 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323441-5xvm4" Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.752893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323441-5xvm4" event={"ID":"7aa98170-54f3-4694-95d1-22b25f1512ba","Type":"ContainerDied","Data":"bab9c40860d32c25da6fc0eb428a6fb6af5281595758521b7e81b4cb4e4ff2e6"} Oct 02 12:01:05 crc kubenswrapper[4725]: I1002 12:01:05.753043 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab9c40860d32c25da6fc0eb428a6fb6af5281595758521b7e81b4cb4e4ff2e6" Oct 02 12:01:08 crc kubenswrapper[4725]: E1002 12:01:08.120519 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache]" Oct 02 12:01:14 crc kubenswrapper[4725]: I1002 12:01:14.269383 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:01:14 crc kubenswrapper[4725]: E1002 12:01:14.270477 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:01:18 crc kubenswrapper[4725]: E1002 12:01:18.415675 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache]" Oct 02 12:01:28 crc kubenswrapper[4725]: I1002 12:01:28.268099 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:01:28 crc kubenswrapper[4725]: E1002 12:01:28.662293 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache]" Oct 02 12:01:29 crc kubenswrapper[4725]: I1002 12:01:29.036672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6"} Oct 02 12:01:38 crc kubenswrapper[4725]: E1002 12:01:38.910612 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache]" Oct 02 12:01:48 crc kubenswrapper[4725]: I1002 12:01:48.254900 4725 generic.go:334] "Generic (PLEG): container finished" podID="72c6c1e1-6428-4023-b56e-ee525bc50c65" containerID="644ad6a6fd9685fb9befa0b6701fca8710524ee7c4eae921a85e065ad2ea16ee" exitCode=0 Oct 02 12:01:48 crc kubenswrapper[4725]: I1002 12:01:48.254968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" event={"ID":"72c6c1e1-6428-4023-b56e-ee525bc50c65","Type":"ContainerDied","Data":"644ad6a6fd9685fb9befa0b6701fca8710524ee7c4eae921a85e065ad2ea16ee"} Oct 02 12:01:49 crc kubenswrapper[4725]: E1002 12:01:49.172022 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache]" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.708561 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809180 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809317 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh4rw\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809376 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809403 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809453 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809477 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809521 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 
12:01:49.809571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809631 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.809778 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"72c6c1e1-6428-4023-b56e-ee525bc50c65\" (UID: \"72c6c1e1-6428-4023-b56e-ee525bc50c65\") " Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.817028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw" (OuterVolumeSpecName: "kube-api-access-xh4rw") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "kube-api-access-xh4rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.817047 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.819329 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.819415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.819991 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.820124 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.820925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.821177 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.821591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.824008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.824342 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.824977 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.847817 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory" (OuterVolumeSpecName: "inventory") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.848512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72c6c1e1-6428-4023-b56e-ee525bc50c65" (UID: "72c6c1e1-6428-4023-b56e-ee525bc50c65"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912255 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912489 4725 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912594 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912672 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912764 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.912851 4725 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc 
kubenswrapper[4725]: I1002 12:01:49.912924 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913002 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913075 4725 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913145 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913224 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913293 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913360 4725 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c6c1e1-6428-4023-b56e-ee525bc50c65-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:49 crc kubenswrapper[4725]: I1002 12:01:49.913439 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh4rw\" (UniqueName: \"kubernetes.io/projected/72c6c1e1-6428-4023-b56e-ee525bc50c65-kube-api-access-xh4rw\") on node \"crc\" DevicePath \"\"" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.279866 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" event={"ID":"72c6c1e1-6428-4023-b56e-ee525bc50c65","Type":"ContainerDied","Data":"60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633"} Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.279934 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f685e1a66bc193f3c895cfd4cc30ab04d90b130d080b5d456c2e5154165633" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.279944 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hgh68" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.428026 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t"] Oct 02 12:01:50 crc kubenswrapper[4725]: E1002 12:01:50.428493 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c6c1e1-6428-4023-b56e-ee525bc50c65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.428532 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c6c1e1-6428-4023-b56e-ee525bc50c65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:50 crc kubenswrapper[4725]: E1002 12:01:50.428568 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa98170-54f3-4694-95d1-22b25f1512ba" containerName="keystone-cron" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.428575 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa98170-54f3-4694-95d1-22b25f1512ba" containerName="keystone-cron" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.428778 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c6c1e1-6428-4023-b56e-ee525bc50c65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.428813 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa98170-54f3-4694-95d1-22b25f1512ba" containerName="keystone-cron" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.429924 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.431886 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.433092 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.433176 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.433093 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.436250 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.446487 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t"] Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.525105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.525310 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.525357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.525399 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.525426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n799k\" (UniqueName: \"kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.626900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.626981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.627021 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.627066 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.627092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n799k\" (UniqueName: \"kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: 
\"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.628082 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.631832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.632660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.633859 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.656496 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n799k\" (UniqueName: \"kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ctk5t\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:50 crc kubenswrapper[4725]: I1002 12:01:50.759633 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:01:51 crc kubenswrapper[4725]: I1002 12:01:51.333177 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t"] Oct 02 12:01:51 crc kubenswrapper[4725]: I1002 12:01:51.343973 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 12:01:52 crc kubenswrapper[4725]: I1002 12:01:52.296165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" event={"ID":"db895bfa-9a45-45f3-8214-cf8c9e1a1351","Type":"ContainerStarted","Data":"89bac74f5d75611bc8798dc586d6a7be4029db73eef1688de199eab8784bf6f5"} Oct 02 12:01:53 crc kubenswrapper[4725]: I1002 12:01:53.315149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" event={"ID":"db895bfa-9a45-45f3-8214-cf8c9e1a1351","Type":"ContainerStarted","Data":"a24333a962924f0e152c2704e11145af56b99bbb1b12b0e735e1a0ad7822e5a6"} Oct 02 12:01:59 crc kubenswrapper[4725]: E1002 12:01:59.401308 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice/crio-f9dcfc7571ece28bfa2103d3fe2813188ed4565d7ca696ec015b7078686ffbdc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04f0ae5_20a3_47c1_a877_d717d8d7feb8.slice\": RecentStats: unable to find data in memory cache]" Oct 02 12:02:54 crc kubenswrapper[4725]: I1002 12:02:54.936076 4725 generic.go:334] "Generic (PLEG): container finished" podID="db895bfa-9a45-45f3-8214-cf8c9e1a1351" containerID="a24333a962924f0e152c2704e11145af56b99bbb1b12b0e735e1a0ad7822e5a6" exitCode=0 Oct 02 12:02:54 crc kubenswrapper[4725]: I1002 12:02:54.936267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" event={"ID":"db895bfa-9a45-45f3-8214-cf8c9e1a1351","Type":"ContainerDied","Data":"a24333a962924f0e152c2704e11145af56b99bbb1b12b0e735e1a0ad7822e5a6"} Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.429799 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.611347 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n799k\" (UniqueName: \"kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k\") pod \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.611435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0\") pod \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.611489 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory\") pod \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.611520 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle\") pod \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.611553 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key\") pod \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\" (UID: \"db895bfa-9a45-45f3-8214-cf8c9e1a1351\") " Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.624145 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "db895bfa-9a45-45f3-8214-cf8c9e1a1351" (UID: "db895bfa-9a45-45f3-8214-cf8c9e1a1351"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.624299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k" (OuterVolumeSpecName: "kube-api-access-n799k") pod "db895bfa-9a45-45f3-8214-cf8c9e1a1351" (UID: "db895bfa-9a45-45f3-8214-cf8c9e1a1351"). InnerVolumeSpecName "kube-api-access-n799k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.641166 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory" (OuterVolumeSpecName: "inventory") pod "db895bfa-9a45-45f3-8214-cf8c9e1a1351" (UID: "db895bfa-9a45-45f3-8214-cf8c9e1a1351"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.648008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db895bfa-9a45-45f3-8214-cf8c9e1a1351" (UID: "db895bfa-9a45-45f3-8214-cf8c9e1a1351"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.663285 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "db895bfa-9a45-45f3-8214-cf8c9e1a1351" (UID: "db895bfa-9a45-45f3-8214-cf8c9e1a1351"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.713680 4725 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.713892 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.713987 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.714044 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db895bfa-9a45-45f3-8214-cf8c9e1a1351-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.714097 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n799k\" (UniqueName: \"kubernetes.io/projected/db895bfa-9a45-45f3-8214-cf8c9e1a1351-kube-api-access-n799k\") on node \"crc\" DevicePath \"\"" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.958357 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" event={"ID":"db895bfa-9a45-45f3-8214-cf8c9e1a1351","Type":"ContainerDied","Data":"89bac74f5d75611bc8798dc586d6a7be4029db73eef1688de199eab8784bf6f5"} Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.958403 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bac74f5d75611bc8798dc586d6a7be4029db73eef1688de199eab8784bf6f5" Oct 02 12:02:56 crc kubenswrapper[4725]: I1002 12:02:56.958472 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ctk5t" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.075104 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc"] Oct 02 12:02:57 crc kubenswrapper[4725]: E1002 12:02:57.075910 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db895bfa-9a45-45f3-8214-cf8c9e1a1351" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.075937 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="db895bfa-9a45-45f3-8214-cf8c9e1a1351" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.076177 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="db895bfa-9a45-45f3-8214-cf8c9e1a1351" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.077146 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.079065 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.080059 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.080325 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.080665 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.080869 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.081081 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.098980 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc"] Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.223892 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqsjl\" (UniqueName: \"kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.224025 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.224115 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.224314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.224503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.224629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327016 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqsjl\" (UniqueName: \"kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327077 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327148 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327195 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.327229 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.334064 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.339389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.339418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.339652 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.340067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.348769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lqsjl\" (UniqueName: \"kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.404049 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:02:57 crc kubenswrapper[4725]: I1002 12:02:57.970174 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc"] Oct 02 12:02:58 crc kubenswrapper[4725]: I1002 12:02:58.976309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" event={"ID":"c4d85400-0823-4e3c-b7b2-f7902c817c33","Type":"ContainerStarted","Data":"59707c37c94bd631f9b07909eb7f5f044b6fac2ca9300d7ef59a3ec601ee358a"} Oct 02 12:02:58 crc kubenswrapper[4725]: I1002 12:02:58.976351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" event={"ID":"c4d85400-0823-4e3c-b7b2-f7902c817c33","Type":"ContainerStarted","Data":"3f3b92daf2ddb91cd35bc8f73829521a2a1ce6abcde4ee9246f2912e4d1f1278"} Oct 02 12:02:58 crc kubenswrapper[4725]: I1002 12:02:58.993428 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" podStartSLOduration=1.430689009 podStartE2EDuration="1.993408038s" podCreationTimestamp="2025-10-02 12:02:57 +0000 UTC" firstStartedPulling="2025-10-02 12:02:57.969325938 +0000 UTC m=+2097.876825401" lastFinishedPulling="2025-10-02 12:02:58.532044967 +0000 UTC m=+2098.439544430" observedRunningTime="2025-10-02 12:02:58.991397444 +0000 UTC m=+2098.898896907" watchObservedRunningTime="2025-10-02 12:02:58.993408038 +0000 UTC m=+2098.900907501" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.259836 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.262514 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.280666 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.405145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklbt\" (UniqueName: \"kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.405279 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.405353 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.506928 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.507080 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hklbt\" (UniqueName: \"kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.507125 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.507591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.507622 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.527838 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hklbt\" (UniqueName: \"kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt\") pod \"redhat-operators-rzhhw\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:14 crc kubenswrapper[4725]: I1002 12:03:14.583326 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:15 crc kubenswrapper[4725]: I1002 12:03:15.109472 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:15 crc kubenswrapper[4725]: I1002 12:03:15.148968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerStarted","Data":"2c137247043af1cdf8762b5ffd3ef7f40c8da65b7735ae0d1faa321d1e1dba5c"} Oct 02 12:03:16 crc kubenswrapper[4725]: I1002 12:03:16.161788 4725 generic.go:334] "Generic (PLEG): container finished" podID="902750cc-ccf5-49a0-95c1-607f788de133" containerID="abd25676d68817b5a6ca9c1e86ffa9cceead956a20758bcb8752ada9301e1e24" exitCode=0 Oct 02 12:03:16 crc kubenswrapper[4725]: I1002 12:03:16.161848 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerDied","Data":"abd25676d68817b5a6ca9c1e86ffa9cceead956a20758bcb8752ada9301e1e24"} Oct 02 12:03:18 crc kubenswrapper[4725]: I1002 12:03:18.183003 4725 generic.go:334] "Generic (PLEG): container finished" podID="902750cc-ccf5-49a0-95c1-607f788de133" containerID="afd8e7fa1fcc0315c8a84ef100cf620a5f964dd3f827ec7dd3c0076b3bd30794" exitCode=0 Oct 02 12:03:18 crc kubenswrapper[4725]: I1002 12:03:18.183159 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerDied","Data":"afd8e7fa1fcc0315c8a84ef100cf620a5f964dd3f827ec7dd3c0076b3bd30794"} Oct 02 12:03:19 crc kubenswrapper[4725]: I1002 12:03:19.197251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerStarted","Data":"3ded45fbd003689f842bdd0c564343000fb84932e52c82d9aabe2920cc3cf82e"} Oct 02 12:03:19 crc kubenswrapper[4725]: I1002 12:03:19.227859 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rzhhw" podStartSLOduration=2.751738551 podStartE2EDuration="5.227837812s" podCreationTimestamp="2025-10-02 12:03:14 +0000 UTC" firstStartedPulling="2025-10-02 12:03:16.163350421 +0000 UTC m=+2116.070849884" lastFinishedPulling="2025-10-02 12:03:18.639449682 +0000 UTC m=+2118.546949145" observedRunningTime="2025-10-02 12:03:19.22033858 +0000 UTC m=+2119.127838063" watchObservedRunningTime="2025-10-02 12:03:19.227837812 +0000 UTC m=+2119.135337285" Oct 02 12:03:24 crc kubenswrapper[4725]: I1002 12:03:24.583904 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:24 crc kubenswrapper[4725]: I1002 12:03:24.584475 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:24 crc kubenswrapper[4725]: I1002 12:03:24.650622 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:25 crc kubenswrapper[4725]: I1002 12:03:25.295825 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:25 crc kubenswrapper[4725]: I1002 12:03:25.364417 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:27 crc kubenswrapper[4725]: I1002 12:03:27.268221 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rzhhw" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="registry-server" containerID="cri-o://3ded45fbd003689f842bdd0c564343000fb84932e52c82d9aabe2920cc3cf82e" gracePeriod=2 Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.281497 4725 generic.go:334] "Generic (PLEG): container finished" podID="902750cc-ccf5-49a0-95c1-607f788de133" containerID="3ded45fbd003689f842bdd0c564343000fb84932e52c82d9aabe2920cc3cf82e" exitCode=0 Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.281832 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerDied","Data":"3ded45fbd003689f842bdd0c564343000fb84932e52c82d9aabe2920cc3cf82e"} Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.412005 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.615118 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hklbt\" (UniqueName: \"kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt\") pod \"902750cc-ccf5-49a0-95c1-607f788de133\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.615255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content\") pod \"902750cc-ccf5-49a0-95c1-607f788de133\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.615403 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities\") pod \"902750cc-ccf5-49a0-95c1-607f788de133\" (UID: \"902750cc-ccf5-49a0-95c1-607f788de133\") " Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.616269 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities" (OuterVolumeSpecName: "utilities") pod "902750cc-ccf5-49a0-95c1-607f788de133" (UID: "902750cc-ccf5-49a0-95c1-607f788de133"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.626959 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt" (OuterVolumeSpecName: "kube-api-access-hklbt") pod "902750cc-ccf5-49a0-95c1-607f788de133" (UID: "902750cc-ccf5-49a0-95c1-607f788de133"). InnerVolumeSpecName "kube-api-access-hklbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.717511 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hklbt\" (UniqueName: \"kubernetes.io/projected/902750cc-ccf5-49a0-95c1-607f788de133-kube-api-access-hklbt\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:28 crc kubenswrapper[4725]: I1002 12:03:28.717798 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:29 crc kubenswrapper[4725]: I1002 12:03:29.296966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhhw" event={"ID":"902750cc-ccf5-49a0-95c1-607f788de133","Type":"ContainerDied","Data":"2c137247043af1cdf8762b5ffd3ef7f40c8da65b7735ae0d1faa321d1e1dba5c"} Oct 02 12:03:29 crc kubenswrapper[4725]: I1002 12:03:29.297046 4725 scope.go:117] "RemoveContainer" containerID="3ded45fbd003689f842bdd0c564343000fb84932e52c82d9aabe2920cc3cf82e" Oct 02 12:03:29 crc kubenswrapper[4725]: I1002 12:03:29.297065 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhhw" Oct 02 12:03:29 crc kubenswrapper[4725]: I1002 12:03:29.341859 4725 scope.go:117] "RemoveContainer" containerID="afd8e7fa1fcc0315c8a84ef100cf620a5f964dd3f827ec7dd3c0076b3bd30794" Oct 02 12:03:29 crc kubenswrapper[4725]: I1002 12:03:29.388797 4725 scope.go:117] "RemoveContainer" containerID="abd25676d68817b5a6ca9c1e86ffa9cceead956a20758bcb8752ada9301e1e24" Oct 02 12:03:30 crc kubenswrapper[4725]: I1002 12:03:30.035896 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "902750cc-ccf5-49a0-95c1-607f788de133" (UID: "902750cc-ccf5-49a0-95c1-607f788de133"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:03:30 crc kubenswrapper[4725]: I1002 12:03:30.045042 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902750cc-ccf5-49a0-95c1-607f788de133-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:30 crc kubenswrapper[4725]: I1002 12:03:30.239395 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:30 crc kubenswrapper[4725]: I1002 12:03:30.248981 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rzhhw"] Oct 02 12:03:31 crc kubenswrapper[4725]: I1002 12:03:31.298036 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902750cc-ccf5-49a0-95c1-607f788de133" path="/var/lib/kubelet/pods/902750cc-ccf5-49a0-95c1-607f788de133/volumes" Oct 02 12:03:44 crc kubenswrapper[4725]: I1002 12:03:44.978373 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:03:44 crc kubenswrapper[4725]: I1002 12:03:44.979134 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:03:45 crc kubenswrapper[4725]: I1002 12:03:45.477002 4725 generic.go:334] "Generic (PLEG): container finished" podID="c4d85400-0823-4e3c-b7b2-f7902c817c33" containerID="59707c37c94bd631f9b07909eb7f5f044b6fac2ca9300d7ef59a3ec601ee358a" exitCode=0 Oct 02 12:03:45 crc kubenswrapper[4725]: I1002 12:03:45.477047 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" event={"ID":"c4d85400-0823-4e3c-b7b2-f7902c817c33","Type":"ContainerDied","Data":"59707c37c94bd631f9b07909eb7f5f044b6fac2ca9300d7ef59a3ec601ee358a"} Oct 02 12:03:46 crc kubenswrapper[4725]: I1002 12:03:46.920976 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018287 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018356 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018414 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqsjl\" (UniqueName: \"kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018636 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.018660 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle\") pod \"c4d85400-0823-4e3c-b7b2-f7902c817c33\" (UID: \"c4d85400-0823-4e3c-b7b2-f7902c817c33\") " Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.026366 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.026475 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl" (OuterVolumeSpecName: "kube-api-access-lqsjl") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "kube-api-access-lqsjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.053311 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.054321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory" (OuterVolumeSpecName: "inventory") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.059144 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.076689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c4d85400-0823-4e3c-b7b2-f7902c817c33" (UID: "c4d85400-0823-4e3c-b7b2-f7902c817c33"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120665 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120716 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120740 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120753 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120762 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4d85400-0823-4e3c-b7b2-f7902c817c33-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.120770 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqsjl\" (UniqueName: \"kubernetes.io/projected/c4d85400-0823-4e3c-b7b2-f7902c817c33-kube-api-access-lqsjl\") on node \"crc\" DevicePath \"\"" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.500047 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" event={"ID":"c4d85400-0823-4e3c-b7b2-f7902c817c33","Type":"ContainerDied","Data":"3f3b92daf2ddb91cd35bc8f73829521a2a1ce6abcde4ee9246f2912e4d1f1278"} Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.500101 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.500108 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f3b92daf2ddb91cd35bc8f73829521a2a1ce6abcde4ee9246f2912e4d1f1278" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.635224 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt"] Oct 02 12:03:47 crc kubenswrapper[4725]: E1002 12:03:47.636244 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="extract-utilities" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636274 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="extract-utilities" Oct 02 12:03:47 crc kubenswrapper[4725]: E1002 12:03:47.636298 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="extract-content" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636307 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="extract-content" Oct 02 12:03:47 crc kubenswrapper[4725]: E1002 12:03:47.636341 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d85400-0823-4e3c-b7b2-f7902c817c33" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636350 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d85400-0823-4e3c-b7b2-f7902c817c33" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 12:03:47 crc kubenswrapper[4725]: E1002 12:03:47.636368 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="registry-server" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636376 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="registry-server" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636651 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d85400-0823-4e3c-b7b2-f7902c817c33" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.636694 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="902750cc-ccf5-49a0-95c1-607f788de133" containerName="registry-server" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.637694 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.639568 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.640400 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.640493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.641155 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.641174 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.650168 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt"] Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.735152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5tp\" (UniqueName: \"kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.735339 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.735466 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.735544 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.735863 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.837497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.837672 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.837765 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.837838 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.837900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5tp\" (UniqueName: \"kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.842650 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.843344 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.843901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.844424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.868616 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5tp\" (UniqueName: \"kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:47 crc kubenswrapper[4725]: I1002 12:03:47.997849 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:03:48 crc kubenswrapper[4725]: I1002 12:03:48.553234 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt"] Oct 02 12:03:49 crc kubenswrapper[4725]: I1002 12:03:49.528148 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" event={"ID":"c525e8cd-4d87-4d2b-9d78-73199eebbbee","Type":"ContainerStarted","Data":"4d4dd1056878ea9949abd7583cc0c61ca2f601dc7b3617b08b6cc7309875cc6b"} Oct 02 12:03:50 crc kubenswrapper[4725]: I1002 12:03:50.540277 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" event={"ID":"c525e8cd-4d87-4d2b-9d78-73199eebbbee","Type":"ContainerStarted","Data":"39742fbe451b66b1ff4d9d8de684d88daa5ae37449a7515460e29df33889a1fa"} Oct 02 12:03:50 crc kubenswrapper[4725]: I1002 12:03:50.572529 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" podStartSLOduration=2.709974549 podStartE2EDuration="3.57250754s" podCreationTimestamp="2025-10-02 12:03:47 +0000 UTC" firstStartedPulling="2025-10-02 12:03:48.561832579 +0000 UTC m=+2148.469332042" lastFinishedPulling="2025-10-02 12:03:49.42436557 +0000 UTC m=+2149.331865033" observedRunningTime="2025-10-02 12:03:50.562357627 +0000 UTC m=+2150.469857130" watchObservedRunningTime="2025-10-02 12:03:50.57250754 +0000 UTC m=+2150.480007013" Oct 02 12:04:14 crc kubenswrapper[4725]: I1002 12:04:14.978330 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:04:14 crc kubenswrapper[4725]: I1002 12:04:14.978933 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:04:44 crc kubenswrapper[4725]: I1002 12:04:44.978295 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:04:44 crc kubenswrapper[4725]: I1002 12:04:44.978932 4725 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:04:44 crc kubenswrapper[4725]: I1002 12:04:44.978987 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 12:04:44 crc kubenswrapper[4725]: I1002 12:04:44.979780 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 12:04:44 crc kubenswrapper[4725]: I1002 12:04:44.979846 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6" gracePeriod=600 Oct 02 12:04:46 crc kubenswrapper[4725]: I1002 12:04:46.089195 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6" exitCode=0 Oct 02 12:04:46 crc kubenswrapper[4725]: I1002 12:04:46.089274 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6"} Oct 02 12:04:46 crc kubenswrapper[4725]: I1002 12:04:46.089856 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a"} Oct 02 12:04:46 crc kubenswrapper[4725]: I1002 12:04:46.089875 4725 scope.go:117] "RemoveContainer" containerID="1456745e91ac8662bbf6cbcc053cd19f21852905142f11af6aa1cda7abf0f46d" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.327766 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.333715 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.342192 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.496927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsdb\" (UniqueName: \"kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.496996 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.497025 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.526072 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.528035 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.536171 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.598426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzsdb\" (UniqueName: \"kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.598498 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.598535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.599171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.599199 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.620744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzsdb\" (UniqueName: \"kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb\") pod \"community-operators-p5nwh\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.663191 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.705521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.705610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2qf\" (UniqueName: \"kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.705664 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.807712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.807799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2qf\" (UniqueName: \"kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.807839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.808309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.808976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities\") pod \"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.843497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2qf\" (UniqueName: \"kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf\") pod 
\"certified-operators-cnqpc\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:25 crc kubenswrapper[4725]: I1002 12:05:25.850230 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.051128 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.453914 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:26 crc kubenswrapper[4725]: W1002 12:05:26.463049 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda77a326e_7b0b_4d62_81f6_b15e7cd27d2d.slice/crio-04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e WatchSource:0}: Error finding container 04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e: Status 404 returned error can't find the container with id 04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.487173 4725 generic.go:334] "Generic (PLEG): container finished" podID="8fdbe1a4-6541-4565-8668-3be664426daa" containerID="4b2a681dd1232d8b1ac289766a8ae10190d022b5b06eff832af9235bc3e8ff5a" exitCode=0 Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.487245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerDied","Data":"4b2a681dd1232d8b1ac289766a8ae10190d022b5b06eff832af9235bc3e8ff5a"} Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.487274 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerStarted","Data":"0b5416288e521bb9f9a972af2211f27e3237de8f6bb13ef7c73b672220631d69"} Oct 02 12:05:26 crc kubenswrapper[4725]: I1002 12:05:26.488587 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerStarted","Data":"04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e"} Oct 02 12:05:27 crc kubenswrapper[4725]: I1002 12:05:27.502467 4725 generic.go:334] "Generic (PLEG): container finished" podID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerID="9747e1987df3249516d72ec26163cf9c9d896a49815b340e079291a5efb253e2" exitCode=0 Oct 02 12:05:27 crc kubenswrapper[4725]: I1002 12:05:27.502530 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerDied","Data":"9747e1987df3249516d72ec26163cf9c9d896a49815b340e079291a5efb253e2"} Oct 02 12:05:28 crc kubenswrapper[4725]: I1002 12:05:28.516885 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerStarted","Data":"9ecb40187687cff5202231de164edd92b65e746a87287bc4bbba70a37023156d"} Oct 02 12:05:28 crc kubenswrapper[4725]: I1002 12:05:28.521304 4725 generic.go:334] "Generic (PLEG): container finished" podID="8fdbe1a4-6541-4565-8668-3be664426daa" 
containerID="ad0d73d3ea7496cfdb319b8e31c664a451c8af36ad73ddb72abe8124823fe322" exitCode=0 Oct 02 12:05:28 crc kubenswrapper[4725]: I1002 12:05:28.521345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerDied","Data":"ad0d73d3ea7496cfdb319b8e31c664a451c8af36ad73ddb72abe8124823fe322"} Oct 02 12:05:29 crc kubenswrapper[4725]: I1002 12:05:29.537245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerStarted","Data":"69e81a18b942ca4d6d584d8c2e807236d0bdf8d5de42ff33a016360d55e0db44"} Oct 02 12:05:29 crc kubenswrapper[4725]: I1002 12:05:29.540209 4725 generic.go:334] "Generic (PLEG): container finished" podID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerID="9ecb40187687cff5202231de164edd92b65e746a87287bc4bbba70a37023156d" exitCode=0 Oct 02 12:05:29 crc kubenswrapper[4725]: I1002 12:05:29.540247 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerDied","Data":"9ecb40187687cff5202231de164edd92b65e746a87287bc4bbba70a37023156d"} Oct 02 12:05:29 crc kubenswrapper[4725]: I1002 12:05:29.561810 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5nwh" podStartSLOduration=1.708856329 podStartE2EDuration="4.561788326s" podCreationTimestamp="2025-10-02 12:05:25 +0000 UTC" firstStartedPulling="2025-10-02 12:05:26.489223478 +0000 UTC m=+2246.396722941" lastFinishedPulling="2025-10-02 12:05:29.342155455 +0000 UTC m=+2249.249654938" observedRunningTime="2025-10-02 12:05:29.55525647 +0000 UTC m=+2249.462755933" watchObservedRunningTime="2025-10-02 12:05:29.561788326 +0000 UTC m=+2249.469287799" Oct 02 12:05:30 crc kubenswrapper[4725]: I1002 12:05:30.550978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerStarted","Data":"e0be195d937583f1cce94b54fdc037e27072fdbddf6e34dbd32bcc24b1ad88e1"} Oct 02 12:05:30 crc kubenswrapper[4725]: I1002 12:05:30.572663 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cnqpc" podStartSLOduration=3.088317929 podStartE2EDuration="5.572647208s" podCreationTimestamp="2025-10-02 12:05:25 +0000 UTC" firstStartedPulling="2025-10-02 12:05:27.504468029 +0000 UTC m=+2247.411967502" lastFinishedPulling="2025-10-02 12:05:29.988797308 +0000 UTC m=+2249.896296781" observedRunningTime="2025-10-02 12:05:30.569813302 +0000 UTC m=+2250.477312775" watchObservedRunningTime="2025-10-02 12:05:30.572647208 +0000 UTC m=+2250.480146681" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.664570 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.666152 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.736259 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.850560 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.850618 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:35 crc kubenswrapper[4725]: I1002 12:05:35.898004 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:36 crc kubenswrapper[4725]: I1002 12:05:36.666837 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:36 crc kubenswrapper[4725]: I1002 12:05:36.676797 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:37 crc kubenswrapper[4725]: I1002 12:05:37.306647 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:38 crc kubenswrapper[4725]: I1002 12:05:38.620060 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5nwh" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="registry-server" containerID="cri-o://69e81a18b942ca4d6d584d8c2e807236d0bdf8d5de42ff33a016360d55e0db44" gracePeriod=2 Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.102203 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.102439 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cnqpc" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="registry-server" containerID="cri-o://e0be195d937583f1cce94b54fdc037e27072fdbddf6e34dbd32bcc24b1ad88e1" gracePeriod=2 Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.637510 4725 generic.go:334] "Generic (PLEG): container finished" podID="8fdbe1a4-6541-4565-8668-3be664426daa" containerID="69e81a18b942ca4d6d584d8c2e807236d0bdf8d5de42ff33a016360d55e0db44" exitCode=0 Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.637975 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerDied","Data":"69e81a18b942ca4d6d584d8c2e807236d0bdf8d5de42ff33a016360d55e0db44"} Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.638017 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5nwh" event={"ID":"8fdbe1a4-6541-4565-8668-3be664426daa","Type":"ContainerDied","Data":"0b5416288e521bb9f9a972af2211f27e3237de8f6bb13ef7c73b672220631d69"} Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.638036 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b5416288e521bb9f9a972af2211f27e3237de8f6bb13ef7c73b672220631d69" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.649879 4725 generic.go:334] "Generic (PLEG): container finished" podID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerID="e0be195d937583f1cce94b54fdc037e27072fdbddf6e34dbd32bcc24b1ad88e1" exitCode=0 Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.649929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" 
event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerDied","Data":"e0be195d937583f1cce94b54fdc037e27072fdbddf6e34dbd32bcc24b1ad88e1"} Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.649965 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnqpc" event={"ID":"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d","Type":"ContainerDied","Data":"04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e"} Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.649976 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d01d4a5d99e316f30720c1a23af7ed53496b7d5191d5e81a963c3d8465d42e" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.683834 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.690975 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.707920 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content\") pod \"8fdbe1a4-6541-4565-8668-3be664426daa\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.707962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities\") pod \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.707996 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzsdb\" (UniqueName: \"kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb\") pod \"8fdbe1a4-6541-4565-8668-3be664426daa\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.708039 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content\") pod \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.708053 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t2qf\" (UniqueName: \"kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf\") pod \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\" (UID: \"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.708081 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities\") pod \"8fdbe1a4-6541-4565-8668-3be664426daa\" (UID: \"8fdbe1a4-6541-4565-8668-3be664426daa\") " Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.709355 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities" (OuterVolumeSpecName: "utilities") pod "a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" (UID: 
"a77a326e-7b0b-4d62-81f6-b15e7cd27d2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.709704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities" (OuterVolumeSpecName: "utilities") pod "8fdbe1a4-6541-4565-8668-3be664426daa" (UID: "8fdbe1a4-6541-4565-8668-3be664426daa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.724554 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb" (OuterVolumeSpecName: "kube-api-access-tzsdb") pod "8fdbe1a4-6541-4565-8668-3be664426daa" (UID: "8fdbe1a4-6541-4565-8668-3be664426daa"). InnerVolumeSpecName "kube-api-access-tzsdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.727601 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf" (OuterVolumeSpecName: "kube-api-access-5t2qf") pod "a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" (UID: "a77a326e-7b0b-4d62-81f6-b15e7cd27d2d"). InnerVolumeSpecName "kube-api-access-5t2qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.762834 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" (UID: "a77a326e-7b0b-4d62-81f6-b15e7cd27d2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.773389 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fdbe1a4-6541-4565-8668-3be664426daa" (UID: "8fdbe1a4-6541-4565-8668-3be664426daa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810500 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810539 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810552 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzsdb\" (UniqueName: \"kubernetes.io/projected/8fdbe1a4-6541-4565-8668-3be664426daa-kube-api-access-tzsdb\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810594 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810604 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t2qf\" (UniqueName: \"kubernetes.io/projected/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d-kube-api-access-5t2qf\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:39 crc kubenswrapper[4725]: I1002 12:05:39.810613 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdbe1a4-6541-4565-8668-3be664426daa-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.663073 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5nwh" Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.663122 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnqpc" Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.716184 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.728161 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5nwh"] Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.735485 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:40 crc kubenswrapper[4725]: I1002 12:05:40.743452 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cnqpc"] Oct 02 12:05:41 crc kubenswrapper[4725]: I1002 12:05:41.285500 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" path="/var/lib/kubelet/pods/8fdbe1a4-6541-4565-8668-3be664426daa/volumes" Oct 02 12:05:41 crc kubenswrapper[4725]: I1002 12:05:41.286594 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" path="/var/lib/kubelet/pods/a77a326e-7b0b-4d62-81f6-b15e7cd27d2d/volumes" Oct 02 12:07:14 crc kubenswrapper[4725]: I1002 12:07:14.978988 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:07:14 crc kubenswrapper[4725]: I1002 12:07:14.979633 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:07:44 crc kubenswrapper[4725]: I1002 12:07:44.978052 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:07:44 crc kubenswrapper[4725]: I1002 12:07:44.978584 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:08:02 crc kubenswrapper[4725]: I1002 12:08:02.043265 4725 generic.go:334] "Generic (PLEG): container finished" podID="c525e8cd-4d87-4d2b-9d78-73199eebbbee" containerID="39742fbe451b66b1ff4d9d8de684d88daa5ae37449a7515460e29df33889a1fa" exitCode=0 Oct 02 12:08:02 crc kubenswrapper[4725]: I1002 12:08:02.043380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" event={"ID":"c525e8cd-4d87-4d2b-9d78-73199eebbbee","Type":"ContainerDied","Data":"39742fbe451b66b1ff4d9d8de684d88daa5ae37449a7515460e29df33889a1fa"} Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.505359 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.628836 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory\") pod \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.628913 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key\") pod \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.628972 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle\") pod \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.629012 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0\") pod \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.629037 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5tp\" (UniqueName: \"kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp\") pod \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\" (UID: \"c525e8cd-4d87-4d2b-9d78-73199eebbbee\") " Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.634360 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp" (OuterVolumeSpecName: "kube-api-access-4v5tp") pod "c525e8cd-4d87-4d2b-9d78-73199eebbbee" (UID: "c525e8cd-4d87-4d2b-9d78-73199eebbbee"). InnerVolumeSpecName "kube-api-access-4v5tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.636144 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c525e8cd-4d87-4d2b-9d78-73199eebbbee" (UID: "c525e8cd-4d87-4d2b-9d78-73199eebbbee"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.659035 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c525e8cd-4d87-4d2b-9d78-73199eebbbee" (UID: "c525e8cd-4d87-4d2b-9d78-73199eebbbee"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.661574 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory" (OuterVolumeSpecName: "inventory") pod "c525e8cd-4d87-4d2b-9d78-73199eebbbee" (UID: "c525e8cd-4d87-4d2b-9d78-73199eebbbee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.678865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c525e8cd-4d87-4d2b-9d78-73199eebbbee" (UID: "c525e8cd-4d87-4d2b-9d78-73199eebbbee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.731248 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.731294 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.731306 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5tp\" (UniqueName: \"kubernetes.io/projected/c525e8cd-4d87-4d2b-9d78-73199eebbbee-kube-api-access-4v5tp\") on node \"crc\" DevicePath \"\"" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.731317 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:08:03 crc kubenswrapper[4725]: I1002 12:08:03.731328 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c525e8cd-4d87-4d2b-9d78-73199eebbbee-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.072008 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" event={"ID":"c525e8cd-4d87-4d2b-9d78-73199eebbbee","Type":"ContainerDied","Data":"4d4dd1056878ea9949abd7583cc0c61ca2f601dc7b3617b08b6cc7309875cc6b"} Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.072048 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4dd1056878ea9949abd7583cc0c61ca2f601dc7b3617b08b6cc7309875cc6b" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.072085 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.176472 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6"] Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.176980 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177003 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177021 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177027 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177045 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="extract-utilities" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177051 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="extract-utilities" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177062 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="extract-utilities" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177068 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="extract-utilities" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177081 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="extract-content" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177086 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="extract-content" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177100 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="extract-content" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177108 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="extract-content" Oct 02 12:08:04 crc kubenswrapper[4725]: E1002 12:08:04.177118 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c525e8cd-4d87-4d2b-9d78-73199eebbbee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177125 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c525e8cd-4d87-4d2b-9d78-73199eebbbee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177298 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77a326e-7b0b-4d62-81f6-b15e7cd27d2d" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177308 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c525e8cd-4d87-4d2b-9d78-73199eebbbee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 
12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.177330 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fdbe1a4-6541-4565-8668-3be664426daa" containerName="registry-server" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.178054 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.180134 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.180134 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.180593 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.180679 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.180872 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.182311 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.182653 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.202163 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6"] Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343140 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343242 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: 
\"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343551 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.343792 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtsmr\" (UniqueName: \"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.344110 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.344215 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.446834 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtsmr\" (UniqueName: \"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.446954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447033 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" 
(UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447191 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447281 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447383 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.447486 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.449884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.453893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.454814 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.454313 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.454742 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.456002 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.456016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.457943 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.468989 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtsmr\" (UniqueName: \"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jxhn6\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:04 crc kubenswrapper[4725]: I1002 12:08:04.498197 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:08:05 crc kubenswrapper[4725]: I1002 12:08:05.016738 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6"] Oct 02 12:08:05 crc kubenswrapper[4725]: I1002 12:08:05.018919 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 12:08:05 crc kubenswrapper[4725]: I1002 12:08:05.080790 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" event={"ID":"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72","Type":"ContainerStarted","Data":"33a64a876c18b951952f1be1bc717ca8144ee0bcf7779f043d854de887ad0ed7"} Oct 02 12:08:06 crc kubenswrapper[4725]: I1002 12:08:06.097557 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" event={"ID":"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72","Type":"ContainerStarted","Data":"c6f9a923198179e2ef578ba382974062147843f026503107689129a1059d5226"} Oct 02 12:08:06 crc kubenswrapper[4725]: I1002 12:08:06.129593 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" podStartSLOduration=1.561208011 podStartE2EDuration="2.129572239s" podCreationTimestamp="2025-10-02 12:08:04 +0000 UTC" firstStartedPulling="2025-10-02 12:08:05.018621006 +0000 UTC m=+2404.926120469" lastFinishedPulling="2025-10-02 12:08:05.586985194 +0000 UTC m=+2405.494484697" observedRunningTime="2025-10-02 12:08:06.124090971 +0000 UTC m=+2406.031590504" watchObservedRunningTime="2025-10-02 12:08:06.129572239 +0000 UTC m=+2406.037071702" Oct 02 12:08:14 crc kubenswrapper[4725]: I1002 12:08:14.978499 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:08:14 crc kubenswrapper[4725]: I1002 12:08:14.979176 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:08:14 crc kubenswrapper[4725]: I1002 12:08:14.979301 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 12:08:14 crc kubenswrapper[4725]: I1002 12:08:14.980520 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 12:08:14 crc kubenswrapper[4725]: I1002 12:08:14.980612 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" gracePeriod=600 Oct 02 12:08:15 crc 
kubenswrapper[4725]: E1002 12:08:15.107898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:08:15 crc kubenswrapper[4725]: I1002 12:08:15.185638 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" exitCode=0 Oct 02 12:08:15 crc kubenswrapper[4725]: I1002 12:08:15.185698 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a"} Oct 02 12:08:15 crc kubenswrapper[4725]: I1002 12:08:15.185823 4725 scope.go:117] "RemoveContainer" containerID="2f1d66b556cbbfca3da1b591e3249893cafa7d6f694a715ba2777c2e8980f2f6" Oct 02 12:08:15 crc kubenswrapper[4725]: I1002 12:08:15.186657 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:08:15 crc kubenswrapper[4725]: E1002 12:08:15.187087 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:08:29 crc kubenswrapper[4725]: I1002 12:08:29.267715 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:08:29 crc kubenswrapper[4725]: E1002 12:08:29.268578 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:08:44 crc kubenswrapper[4725]: I1002 12:08:44.268415 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:08:44 crc kubenswrapper[4725]: E1002 12:08:44.269434 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:08:56 crc kubenswrapper[4725]: I1002 12:08:56.268529 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:08:56 crc kubenswrapper[4725]: E1002 12:08:56.270028 4725 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:09:09 crc kubenswrapper[4725]: I1002 12:09:09.268381 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:09:09 crc kubenswrapper[4725]: E1002 12:09:09.269225 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:09:24 crc kubenswrapper[4725]: I1002 12:09:24.268812 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:09:24 crc kubenswrapper[4725]: E1002 12:09:24.269999 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:09:38 crc kubenswrapper[4725]: I1002 12:09:38.267629 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:09:38 crc kubenswrapper[4725]: E1002 12:09:38.268593 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:09:49 crc kubenswrapper[4725]: I1002 12:09:49.268485 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:09:49 crc kubenswrapper[4725]: E1002 12:09:49.269358 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:01 crc kubenswrapper[4725]: I1002 12:10:01.275185 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:10:01 crc kubenswrapper[4725]: E1002 12:10:01.276019 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:15 crc kubenswrapper[4725]: I1002 12:10:15.267807 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:10:15 crc kubenswrapper[4725]: E1002 12:10:15.268570 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:26 crc kubenswrapper[4725]: I1002 12:10:26.268592 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:10:26 crc kubenswrapper[4725]: E1002 12:10:26.269953 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:39 crc kubenswrapper[4725]: I1002 12:10:39.268602 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:10:39 crc kubenswrapper[4725]: E1002 12:10:39.269738 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.351353 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.354267 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.376484 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.488550 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.488764 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.488937 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2s5k\" (UniqueName: \"kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.590767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.590852 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.590917 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2s5k\" (UniqueName: \"kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.591392 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.591677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.616115 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n2s5k\" (UniqueName: \"kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k\") pod \"redhat-marketplace-t4mzn\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:41 crc kubenswrapper[4725]: I1002 12:10:41.679789 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:42 crc kubenswrapper[4725]: I1002 12:10:42.124872 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:42 crc kubenswrapper[4725]: I1002 12:10:42.636255 4725 generic.go:334] "Generic (PLEG): container finished" podID="e11939b8-a963-4413-811e-5e0c19604f89" containerID="b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade" exitCode=0 Oct 02 12:10:42 crc kubenswrapper[4725]: I1002 12:10:42.637282 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerDied","Data":"b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade"} Oct 02 12:10:42 crc kubenswrapper[4725]: I1002 12:10:42.637364 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerStarted","Data":"a787c370eed058b33760c97a85cc47885cb98f57c7b279596fe8aaf9e1a77a85"} Oct 02 12:10:43 crc kubenswrapper[4725]: I1002 12:10:43.647143 4725 generic.go:334] "Generic (PLEG): container finished" podID="e11939b8-a963-4413-811e-5e0c19604f89" containerID="387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c" exitCode=0 Oct 02 12:10:43 crc kubenswrapper[4725]: I1002 12:10:43.647219 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerDied","Data":"387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c"} Oct 02 12:10:44 crc kubenswrapper[4725]: I1002 12:10:44.661286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerStarted","Data":"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c"} Oct 02 12:10:44 crc kubenswrapper[4725]: I1002 12:10:44.682986 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4mzn" podStartSLOduration=2.23441558 podStartE2EDuration="3.682967746s" podCreationTimestamp="2025-10-02 12:10:41 +0000 UTC" firstStartedPulling="2025-10-02 12:10:42.638912047 +0000 UTC m=+2562.546411510" lastFinishedPulling="2025-10-02 12:10:44.087464213 +0000 UTC m=+2563.994963676" observedRunningTime="2025-10-02 12:10:44.680464238 +0000 UTC m=+2564.587963711" watchObservedRunningTime="2025-10-02 12:10:44.682967746 +0000 UTC m=+2564.590467219" Oct 02 12:10:51 crc kubenswrapper[4725]: I1002 12:10:51.681016 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:51 crc kubenswrapper[4725]: I1002 12:10:51.681604 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:51 crc kubenswrapper[4725]: I1002 12:10:51.736192 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:51 crc kubenswrapper[4725]: I1002 12:10:51.784114 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:51 crc kubenswrapper[4725]: I1002 12:10:51.990913 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:52 crc kubenswrapper[4725]: I1002 12:10:52.269326 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:10:52 crc kubenswrapper[4725]: E1002 12:10:52.269808 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:10:53 crc kubenswrapper[4725]: I1002 12:10:53.750631 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4mzn" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="registry-server" containerID="cri-o://82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c" gracePeriod=2 Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.265640 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.341482 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2s5k\" (UniqueName: \"kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k\") pod \"e11939b8-a963-4413-811e-5e0c19604f89\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.341634 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content\") pod \"e11939b8-a963-4413-811e-5e0c19604f89\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.341843 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities\") pod \"e11939b8-a963-4413-811e-5e0c19604f89\" (UID: \"e11939b8-a963-4413-811e-5e0c19604f89\") " Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.342763 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities" (OuterVolumeSpecName: "utilities") pod "e11939b8-a963-4413-811e-5e0c19604f89" (UID: "e11939b8-a963-4413-811e-5e0c19604f89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.348996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k" (OuterVolumeSpecName: "kube-api-access-n2s5k") pod "e11939b8-a963-4413-811e-5e0c19604f89" (UID: "e11939b8-a963-4413-811e-5e0c19604f89"). 
InnerVolumeSpecName "kube-api-access-n2s5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.356504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e11939b8-a963-4413-811e-5e0c19604f89" (UID: "e11939b8-a963-4413-811e-5e0c19604f89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.444628 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.444674 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2s5k\" (UniqueName: \"kubernetes.io/projected/e11939b8-a963-4413-811e-5e0c19604f89-kube-api-access-n2s5k\") on node \"crc\" DevicePath \"\"" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.444689 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11939b8-a963-4413-811e-5e0c19604f89-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.766529 4725 generic.go:334] "Generic (PLEG): container finished" podID="e11939b8-a963-4413-811e-5e0c19604f89" containerID="82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c" exitCode=0 Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.766581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerDied","Data":"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c"} Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.766588 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4mzn" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.766611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4mzn" event={"ID":"e11939b8-a963-4413-811e-5e0c19604f89","Type":"ContainerDied","Data":"a787c370eed058b33760c97a85cc47885cb98f57c7b279596fe8aaf9e1a77a85"} Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.766633 4725 scope.go:117] "RemoveContainer" containerID="82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.797063 4725 scope.go:117] "RemoveContainer" containerID="387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.803905 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.808849 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4mzn"] Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.822489 4725 scope.go:117] "RemoveContainer" containerID="b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.881023 4725 scope.go:117] "RemoveContainer" containerID="82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c" Oct 02 12:10:54 crc kubenswrapper[4725]: E1002 12:10:54.886057 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c\": container with ID starting with 82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c not found: ID does not exist" containerID="82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.886117 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c"} err="failed to get container status \"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c\": rpc error: code = NotFound desc = could not find container \"82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c\": container with ID starting with 82f6da25fa7554c1743ef15280853764c7acd496a0aea9bff6dd6a5225565c6c not found: ID does not exist" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.886150 4725 scope.go:117] "RemoveContainer" containerID="387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c" Oct 02 12:10:54 crc kubenswrapper[4725]: E1002 12:10:54.897413 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c\": container with ID starting with 387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c not found: ID does not exist" containerID="387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.897461 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c"} err="failed to get container status \"387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c\": rpc error: code = NotFound desc = could not find 
container \"387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c\": container with ID starting with 387ec3a97e4b637980d53110395d0a88c6d89d98400ec88ed63bdbdbafb92f6c not found: ID does not exist" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.897489 4725 scope.go:117] "RemoveContainer" containerID="b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade" Oct 02 12:10:54 crc kubenswrapper[4725]: E1002 12:10:54.899172 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade\": container with ID starting with b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade not found: ID does not exist" containerID="b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade" Oct 02 12:10:54 crc kubenswrapper[4725]: I1002 12:10:54.899237 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade"} err="failed to get container status \"b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade\": rpc error: code = NotFound desc = could not find container \"b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade\": container with ID starting with b64c7f8e7d92d17e1247fd6aaf0910327ef081a2723b1ae6d2fd0b1740284ade not found: ID does not exist" Oct 02 12:10:55 crc kubenswrapper[4725]: I1002 12:10:55.282533 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11939b8-a963-4413-811e-5e0c19604f89" path="/var/lib/kubelet/pods/e11939b8-a963-4413-811e-5e0c19604f89/volumes" Oct 02 12:11:04 crc kubenswrapper[4725]: I1002 12:11:04.268219 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:11:04 crc kubenswrapper[4725]: E1002 12:11:04.269077 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:11:11 crc kubenswrapper[4725]: I1002 12:11:11.965192 4725 generic.go:334] "Generic (PLEG): container finished" podID="e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" containerID="c6f9a923198179e2ef578ba382974062147843f026503107689129a1059d5226" exitCode=0 Oct 02 12:11:11 crc kubenswrapper[4725]: I1002 12:11:11.965244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" event={"ID":"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72","Type":"ContainerDied","Data":"c6f9a923198179e2ef578ba382974062147843f026503107689129a1059d5226"} Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.438578 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555515 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555580 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555611 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555644 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555718 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.555976 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtsmr\" (UniqueName: \"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.556008 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.556058 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.556089 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0\") pod \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\" (UID: \"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72\") " Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.563735 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr" (OuterVolumeSpecName: "kube-api-access-xtsmr") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "kube-api-access-xtsmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.566956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.587153 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.591761 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.594694 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.595499 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory" (OuterVolumeSpecName: "inventory") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.596281 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.598857 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.608301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" (UID: "e1ff53ac-ab56-4b7f-99b1-4ed1c299af72"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657578 4725 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657850 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtsmr\" (UniqueName: \"kubernetes.io/projected/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-kube-api-access-xtsmr\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657861 4725 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657871 4725 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657879 4725 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657890 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657899 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657908 4725 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.657916 4725 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1ff53ac-ab56-4b7f-99b1-4ed1c299af72-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.984253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" event={"ID":"e1ff53ac-ab56-4b7f-99b1-4ed1c299af72","Type":"ContainerDied","Data":"33a64a876c18b951952f1be1bc717ca8144ee0bcf7779f043d854de887ad0ed7"} Oct 02 12:11:13 crc kubenswrapper[4725]: I1002 12:11:13.984301 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a64a876c18b951952f1be1bc717ca8144ee0bcf7779f043d854de887ad0ed7" Oct 02 12:11:13 crc 
kubenswrapper[4725]: I1002 12:11:13.984336 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jxhn6" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106151 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j"] Oct 02 12:11:14 crc kubenswrapper[4725]: E1002 12:11:14.106611 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="extract-content" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106628 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="extract-content" Oct 02 12:11:14 crc kubenswrapper[4725]: E1002 12:11:14.106644 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="registry-server" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106650 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="registry-server" Oct 02 12:11:14 crc kubenswrapper[4725]: E1002 12:11:14.106666 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106673 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 12:11:14 crc kubenswrapper[4725]: E1002 12:11:14.106691 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="extract-utilities" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106697 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="extract-utilities" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106925 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ff53ac-ab56-4b7f-99b1-4ed1c299af72" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.106959 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11939b8-a963-4413-811e-5e0c19604f89" containerName="registry-server" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.107619 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.109986 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.110478 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.110907 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4hrzx" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.111414 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.114117 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.116264 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j"] Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171263 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171310 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171345 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171452 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171481 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9g2\" (UniqueName: \"kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.171664 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.272958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273021 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9g2\" (UniqueName: \"kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273060 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273116 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: 
\"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.273346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.277402 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.277487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.277513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.278950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.279301 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.288205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.289252 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9g2\" (UniqueName: \"kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-csk6j\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.434639 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:11:14 crc kubenswrapper[4725]: I1002 12:11:14.982645 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j"] Oct 02 12:11:16 crc kubenswrapper[4725]: I1002 12:11:16.002821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" event={"ID":"b3aedfea-069e-4211-878c-b85e0bb9d3ac","Type":"ContainerStarted","Data":"1a9d874b2fe4adf5bb21d171518905d02171ea6ea0f2afaff1f6d9d2af4bb893"} Oct 02 12:11:16 crc kubenswrapper[4725]: I1002 12:11:16.003291 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" event={"ID":"b3aedfea-069e-4211-878c-b85e0bb9d3ac","Type":"ContainerStarted","Data":"65e1388b8447d12bd24b17779fc08f8ba011bc4dd396df51abf3538d99c6c70a"} Oct 02 12:11:16 crc kubenswrapper[4725]: I1002 12:11:16.024114 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" podStartSLOduration=1.476448728 podStartE2EDuration="2.024096743s" podCreationTimestamp="2025-10-02 12:11:14 +0000 UTC" firstStartedPulling="2025-10-02 12:11:14.99258001 +0000 UTC m=+2594.900079473" lastFinishedPulling="2025-10-02 12:11:15.540227985 +0000 UTC m=+2595.447727488" observedRunningTime="2025-10-02 12:11:16.017772795 +0000 UTC m=+2595.925272258" watchObservedRunningTime="2025-10-02 12:11:16.024096743 +0000 UTC m=+2595.931596206" Oct 02 12:11:16 crc kubenswrapper[4725]: I1002 12:11:16.268845 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:11:16 crc kubenswrapper[4725]: E1002 12:11:16.269122 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:11:27 crc kubenswrapper[4725]: I1002 12:11:27.269654 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:11:27 crc kubenswrapper[4725]: E1002 12:11:27.270863 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.583008 4725 scope.go:117] "RemoveContainer" containerID="4b2a681dd1232d8b1ac289766a8ae10190d022b5b06eff832af9235bc3e8ff5a" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.609919 4725 scope.go:117] "RemoveContainer" 
containerID="e0be195d937583f1cce94b54fdc037e27072fdbddf6e34dbd32bcc24b1ad88e1" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.660356 4725 scope.go:117] "RemoveContainer" containerID="9747e1987df3249516d72ec26163cf9c9d896a49815b340e079291a5efb253e2" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.680422 4725 scope.go:117] "RemoveContainer" containerID="9ecb40187687cff5202231de164edd92b65e746a87287bc4bbba70a37023156d" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.727604 4725 scope.go:117] "RemoveContainer" containerID="ad0d73d3ea7496cfdb319b8e31c664a451c8af36ad73ddb72abe8124823fe322" Oct 02 12:11:35 crc kubenswrapper[4725]: I1002 12:11:35.783964 4725 scope.go:117] "RemoveContainer" containerID="69e81a18b942ca4d6d584d8c2e807236d0bdf8d5de42ff33a016360d55e0db44" Oct 02 12:11:38 crc kubenswrapper[4725]: I1002 12:11:38.268195 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:11:38 crc kubenswrapper[4725]: E1002 12:11:38.269068 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:11:51 crc kubenswrapper[4725]: I1002 12:11:51.273845 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:11:51 crc kubenswrapper[4725]: E1002 12:11:51.274632 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:12:06 crc kubenswrapper[4725]: I1002 12:12:06.268358 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:12:06 crc kubenswrapper[4725]: E1002 12:12:06.269099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:12:17 crc kubenswrapper[4725]: I1002 12:12:17.269008 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:12:17 crc kubenswrapper[4725]: E1002 12:12:17.270913 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:12:30 crc kubenswrapper[4725]: I1002 12:12:30.268552 4725 
scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:12:30 crc kubenswrapper[4725]: E1002 12:12:30.269309 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:12:44 crc kubenswrapper[4725]: I1002 12:12:44.267849 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:12:44 crc kubenswrapper[4725]: E1002 12:12:44.268554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:12:59 crc kubenswrapper[4725]: I1002 12:12:59.268851 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:12:59 crc kubenswrapper[4725]: E1002 12:12:59.270068 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:13:11 crc kubenswrapper[4725]: I1002 12:13:11.276978 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:13:11 crc kubenswrapper[4725]: E1002 12:13:11.278438 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:13:24 crc kubenswrapper[4725]: I1002 12:13:24.268038 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:13:25 crc kubenswrapper[4725]: I1002 12:13:25.279844 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40"} Oct 02 12:13:30 crc kubenswrapper[4725]: I1002 12:13:30.322228 4725 generic.go:334] "Generic (PLEG): container finished" podID="b3aedfea-069e-4211-878c-b85e0bb9d3ac" containerID="1a9d874b2fe4adf5bb21d171518905d02171ea6ea0f2afaff1f6d9d2af4bb893" exitCode=0 Oct 02 12:13:30 crc kubenswrapper[4725]: I1002 12:13:30.322308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" event={"ID":"b3aedfea-069e-4211-878c-b85e0bb9d3ac","Type":"ContainerDied","Data":"1a9d874b2fe4adf5bb21d171518905d02171ea6ea0f2afaff1f6d9d2af4bb893"} Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.770039 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.835358 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.835466 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.836520 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft9g2\" (UniqueName: \"kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.836634 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.836676 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.836702 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.836779 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key\") pod \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\" (UID: \"b3aedfea-069e-4211-878c-b85e0bb9d3ac\") " Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.843992 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2" (OuterVolumeSpecName: "kube-api-access-ft9g2") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "kube-api-access-ft9g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.846571 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.868000 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.871895 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.878313 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.878349 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory" (OuterVolumeSpecName: "inventory") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.878396 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b3aedfea-069e-4211-878c-b85e0bb9d3ac" (UID: "b3aedfea-069e-4211-878c-b85e0bb9d3ac"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941307 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft9g2\" (UniqueName: \"kubernetes.io/projected/b3aedfea-069e-4211-878c-b85e0bb9d3ac-kube-api-access-ft9g2\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941351 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941366 4725 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941383 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941396 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941409 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:31 crc kubenswrapper[4725]: I1002 12:13:31.941422 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3aedfea-069e-4211-878c-b85e0bb9d3ac-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 02 12:13:32 crc kubenswrapper[4725]: I1002 12:13:32.342964 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" event={"ID":"b3aedfea-069e-4211-878c-b85e0bb9d3ac","Type":"ContainerDied","Data":"65e1388b8447d12bd24b17779fc08f8ba011bc4dd396df51abf3538d99c6c70a"} Oct 02 12:13:32 crc kubenswrapper[4725]: I1002 12:13:32.343020 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e1388b8447d12bd24b17779fc08f8ba011bc4dd396df51abf3538d99c6c70a" Oct 02 12:13:32 crc kubenswrapper[4725]: I1002 12:13:32.343064 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-csk6j" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.631111 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:05 crc kubenswrapper[4725]: E1002 12:14:05.632053 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3aedfea-069e-4211-878c-b85e0bb9d3ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.632067 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3aedfea-069e-4211-878c-b85e0bb9d3ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.632247 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3aedfea-069e-4211-878c-b85e0bb9d3ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.633697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.664870 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.750156 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.750287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gmrp\" (UniqueName: \"kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.750364 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.852114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.852231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gmrp\" (UniqueName: \"kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.852287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.852779 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.852813 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.869636 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gmrp\" (UniqueName: \"kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp\") pod \"redhat-operators-wwltt\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:05 crc kubenswrapper[4725]: I1002 12:14:05.964608 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:06 crc kubenswrapper[4725]: I1002 12:14:06.456009 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:06 crc kubenswrapper[4725]: I1002 12:14:06.688151 4725 generic.go:334] "Generic (PLEG): container finished" podID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerID="069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc" exitCode=0 Oct 02 12:14:06 crc kubenswrapper[4725]: I1002 12:14:06.688204 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerDied","Data":"069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc"} Oct 02 12:14:06 crc kubenswrapper[4725]: I1002 12:14:06.688282 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerStarted","Data":"0224dada97ba5ed26c88a097a5e06e96d291132070f9476de2397f2dbea992e0"} Oct 02 12:14:06 crc kubenswrapper[4725]: I1002 12:14:06.689979 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 12:14:08 crc kubenswrapper[4725]: I1002 12:14:08.707256 4725 generic.go:334] "Generic (PLEG): container finished" podID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerID="ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee" exitCode=0 Oct 02 12:14:08 crc kubenswrapper[4725]: I1002 12:14:08.707362 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerDied","Data":"ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee"} Oct 02 12:14:09 crc kubenswrapper[4725]: I1002 12:14:09.722659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" 
event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerStarted","Data":"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f"} Oct 02 12:14:09 crc kubenswrapper[4725]: I1002 12:14:09.743092 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wwltt" podStartSLOduration=2.2704771790000002 podStartE2EDuration="4.743054547s" podCreationTimestamp="2025-10-02 12:14:05 +0000 UTC" firstStartedPulling="2025-10-02 12:14:06.689747392 +0000 UTC m=+2766.597246855" lastFinishedPulling="2025-10-02 12:14:09.16232476 +0000 UTC m=+2769.069824223" observedRunningTime="2025-10-02 12:14:09.739668117 +0000 UTC m=+2769.647167590" watchObservedRunningTime="2025-10-02 12:14:09.743054547 +0000 UTC m=+2769.650554010" Oct 02 12:14:15 crc kubenswrapper[4725]: I1002 12:14:15.965427 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:15 crc kubenswrapper[4725]: I1002 12:14:15.966553 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:16 crc kubenswrapper[4725]: I1002 12:14:16.020538 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:16 crc kubenswrapper[4725]: I1002 12:14:16.888009 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:16 crc kubenswrapper[4725]: I1002 12:14:16.942173 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:18 crc kubenswrapper[4725]: I1002 12:14:18.823361 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwltt" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="registry-server" containerID="cri-o://f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f" gracePeriod=2 Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.292180 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.436122 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities\") pod \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.436253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content\") pod \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.436498 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gmrp\" (UniqueName: \"kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp\") pod \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\" (UID: \"950e83cf-cddd-4ba5-a74e-5c9bc81719a0\") " Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.437206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities" (OuterVolumeSpecName: "utilities") pod "950e83cf-cddd-4ba5-a74e-5c9bc81719a0" (UID: "950e83cf-cddd-4ba5-a74e-5c9bc81719a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.445938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp" (OuterVolumeSpecName: "kube-api-access-4gmrp") pod "950e83cf-cddd-4ba5-a74e-5c9bc81719a0" (UID: "950e83cf-cddd-4ba5-a74e-5c9bc81719a0"). InnerVolumeSpecName "kube-api-access-4gmrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.538528 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gmrp\" (UniqueName: \"kubernetes.io/projected/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-kube-api-access-4gmrp\") on node \"crc\" DevicePath \"\"" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.538562 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.544942 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "950e83cf-cddd-4ba5-a74e-5c9bc81719a0" (UID: "950e83cf-cddd-4ba5-a74e-5c9bc81719a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.640689 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950e83cf-cddd-4ba5-a74e-5c9bc81719a0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.836627 4725 generic.go:334] "Generic (PLEG): container finished" podID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerID="f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f" exitCode=0 Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.836667 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwltt" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.836696 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerDied","Data":"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f"} Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.836758 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwltt" event={"ID":"950e83cf-cddd-4ba5-a74e-5c9bc81719a0","Type":"ContainerDied","Data":"0224dada97ba5ed26c88a097a5e06e96d291132070f9476de2397f2dbea992e0"} Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.836780 4725 scope.go:117] "RemoveContainer" containerID="f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.872982 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.874499 4725 scope.go:117] "RemoveContainer" containerID="ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.880772 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwltt"] Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.906272 4725 scope.go:117] "RemoveContainer" containerID="069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.963232 4725 scope.go:117] "RemoveContainer" containerID="f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f" Oct 02 12:14:19 crc kubenswrapper[4725]: E1002 12:14:19.963962 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f\": container with ID starting with f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f not found: ID does not exist" containerID="f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f" Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.964039 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f"} err="failed to get container status \"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f\": rpc error: code = NotFound desc = could not find container \"f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f\": container with ID starting with f6a3d6994ad0c1c3d0648739f45a393776572836f2fbd445ebae330f0783ff0f not found: ID does not exist" Oct 02 12:14:19 crc 
kubenswrapper[4725]: I1002 12:14:19.964087 4725 scope.go:117] "RemoveContainer" containerID="ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee"
Oct 02 12:14:19 crc kubenswrapper[4725]: E1002 12:14:19.964712 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee\": container with ID starting with ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee not found: ID does not exist" containerID="ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee"
Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.964810 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee"} err="failed to get container status \"ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee\": rpc error: code = NotFound desc = could not find container \"ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee\": container with ID starting with ed922e1c004522705478b681054a9f9d813e1e93d9d8694c53bcfe75ca4b34ee not found: ID does not exist"
Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.964832 4725 scope.go:117] "RemoveContainer" containerID="069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc"
Oct 02 12:14:19 crc kubenswrapper[4725]: E1002 12:14:19.965196 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc\": container with ID starting with 069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc not found: ID does not exist" containerID="069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc"
Oct 02 12:14:19 crc kubenswrapper[4725]: I1002 12:14:19.965251 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc"} err="failed to get container status \"069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc\": rpc error: code = NotFound desc = could not find container \"069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc\": container with ID starting with 069c2c1a2cc50afc972d1e220df60d0de413aeb7fdd7bb25fcbb12ec73005ccc not found: ID does not exist"
Oct 02 12:14:21 crc kubenswrapper[4725]: I1002 12:14:21.280603 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" path="/var/lib/kubelet/pods/950e83cf-cddd-4ba5-a74e-5c9bc81719a0/volumes"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.642806 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 02 12:14:29 crc kubenswrapper[4725]: E1002 12:14:29.644333 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="extract-content"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.644366 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="extract-content"
Oct 02 12:14:29 crc kubenswrapper[4725]: E1002 12:14:29.644408 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="extract-utilities"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.644425 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="extract-utilities"
Oct 02 12:14:29 crc kubenswrapper[4725]: E1002 12:14:29.644511 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="registry-server"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.644530 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="registry-server"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.645035 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="950e83cf-cddd-4ba5-a74e-5c9bc81719a0" containerName="registry-server"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.646422 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.648614 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.658318 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.671913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-967nc"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.672293 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.673460 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwd9g\" (UniqueName: \"kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762759 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762841 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762864 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.762956 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.763011 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.763037 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.763070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.865257 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.865458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.865522 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.865602 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.866131 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867078 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwd9g\" (UniqueName: \"kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867303 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.867476 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.868418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.870025 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.873855 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.875264 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.875718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.878865 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.899854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwd9g\" (UniqueName: \"kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.935200 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " pod="openstack/tempest-tests-tempest"
Oct 02 12:14:29 crc kubenswrapper[4725]: I1002 12:14:29.993872 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 02 12:14:30 crc kubenswrapper[4725]: I1002 12:14:30.290219 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 02 12:14:30 crc kubenswrapper[4725]: I1002 12:14:30.961304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e4dadf0-6f31-4e9d-8590-5324693180b6","Type":"ContainerStarted","Data":"d4b49af898710890196a5787718b580c65c165cb1a9288cdc54046fd87df919d"}
Oct 02 12:14:55 crc kubenswrapper[4725]: E1002 12:14:55.781912 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Oct 02 12:14:55 crc kubenswrapper[4725]: E1002 12:14:55.783039 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwd9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0e4dadf0-6f31-4e9d-8590-5324693180b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 02 12:14:55 crc kubenswrapper[4725]: E1002 12:14:55.784126 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0e4dadf0-6f31-4e9d-8590-5324693180b6"
Oct 02 12:14:56 crc kubenswrapper[4725]: E1002 12:14:56.208571 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="0e4dadf0-6f31-4e9d-8590-5324693180b6"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.161571 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"]
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.163217 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.168197 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.168232 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.177593 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"]
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.298268 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmbr\" (UniqueName: \"kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.298451 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.298558 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.401344 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmbr\" (UniqueName: \"kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.401468 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.401539 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.402754 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.409244 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.432258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmbr\" (UniqueName: \"kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr\") pod \"collect-profiles-29323455-std5t\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:00 crc kubenswrapper[4725]: I1002 12:15:00.492075 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:01 crc kubenswrapper[4725]: I1002 12:15:01.050580 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"]
Oct 02 12:15:01 crc kubenswrapper[4725]: I1002 12:15:01.255461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t" event={"ID":"ed080b2f-b791-4256-8fb3-e4dac138a60a","Type":"ContainerStarted","Data":"f3effd865db90c0e4e06592b6ca3799974cbfa0a4d618e0effefe0b932c3c1a1"}
Oct 02 12:15:02 crc kubenswrapper[4725]: I1002 12:15:02.269970 4725 generic.go:334] "Generic (PLEG): container finished" podID="ed080b2f-b791-4256-8fb3-e4dac138a60a" containerID="932638448b743c2b391997d13567fedad34ea21727997e550ef9f33acbc318ce" exitCode=0
Oct 02 12:15:02 crc kubenswrapper[4725]: I1002 12:15:02.270096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t" event={"ID":"ed080b2f-b791-4256-8fb3-e4dac138a60a","Type":"ContainerDied","Data":"932638448b743c2b391997d13567fedad34ea21727997e550ef9f33acbc318ce"}
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.702404 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.771473 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume\") pod \"ed080b2f-b791-4256-8fb3-e4dac138a60a\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") "
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.771653 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume\") pod \"ed080b2f-b791-4256-8fb3-e4dac138a60a\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") "
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.771780 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmbr\" (UniqueName: \"kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr\") pod \"ed080b2f-b791-4256-8fb3-e4dac138a60a\" (UID: \"ed080b2f-b791-4256-8fb3-e4dac138a60a\") "
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.772462 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed080b2f-b791-4256-8fb3-e4dac138a60a" (UID: "ed080b2f-b791-4256-8fb3-e4dac138a60a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.777787 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr" (OuterVolumeSpecName: "kube-api-access-bpmbr") pod "ed080b2f-b791-4256-8fb3-e4dac138a60a" (UID: "ed080b2f-b791-4256-8fb3-e4dac138a60a"). InnerVolumeSpecName "kube-api-access-bpmbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.779438 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed080b2f-b791-4256-8fb3-e4dac138a60a" (UID: "ed080b2f-b791-4256-8fb3-e4dac138a60a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.874523 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmbr\" (UniqueName: \"kubernetes.io/projected/ed080b2f-b791-4256-8fb3-e4dac138a60a-kube-api-access-bpmbr\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.874556 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed080b2f-b791-4256-8fb3-e4dac138a60a-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:03 crc kubenswrapper[4725]: I1002 12:15:03.874567 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed080b2f-b791-4256-8fb3-e4dac138a60a-config-volume\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:04 crc kubenswrapper[4725]: I1002 12:15:04.302991 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t" event={"ID":"ed080b2f-b791-4256-8fb3-e4dac138a60a","Type":"ContainerDied","Data":"f3effd865db90c0e4e06592b6ca3799974cbfa0a4d618e0effefe0b932c3c1a1"}
Oct 02 12:15:04 crc kubenswrapper[4725]: I1002 12:15:04.303358 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3effd865db90c0e4e06592b6ca3799974cbfa0a4d618e0effefe0b932c3c1a1"
Oct 02 12:15:04 crc kubenswrapper[4725]: I1002 12:15:04.303092 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323455-std5t"
Oct 02 12:15:04 crc kubenswrapper[4725]: I1002 12:15:04.777024 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck"]
Oct 02 12:15:04 crc kubenswrapper[4725]: I1002 12:15:04.792348 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323410-c7zck"]
Oct 02 12:15:05 crc kubenswrapper[4725]: I1002 12:15:05.295178 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538365c3-9ba5-4fbd-aca1-05525f5a3250" path="/var/lib/kubelet/pods/538365c3-9ba5-4fbd-aca1-05525f5a3250/volumes"
Oct 02 12:15:08 crc kubenswrapper[4725]: I1002 12:15:08.120824 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Oct 02 12:15:09 crc kubenswrapper[4725]: I1002 12:15:09.373599 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e4dadf0-6f31-4e9d-8590-5324693180b6","Type":"ContainerStarted","Data":"b039c63dc9d35e98738a8c80b2f1363d310205ed0a46a6925404d976e47e76e6"}
Oct 02 12:15:09 crc kubenswrapper[4725]: I1002 12:15:09.397112 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.576412808 podStartE2EDuration="41.397084614s" podCreationTimestamp="2025-10-02 12:14:28 +0000 UTC" firstStartedPulling="2025-10-02 12:14:30.297638909 +0000 UTC m=+2790.205138392" lastFinishedPulling="2025-10-02 12:15:08.118310715 +0000 UTC m=+2828.025810198" observedRunningTime="2025-10-02 12:15:09.39131431 +0000 UTC m=+2829.298813803" watchObservedRunningTime="2025-10-02 12:15:09.397084614 +0000 UTC m=+2829.304584117"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.586964 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:29 crc kubenswrapper[4725]: E1002 12:15:29.588218 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed080b2f-b791-4256-8fb3-e4dac138a60a" containerName="collect-profiles"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.588242 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed080b2f-b791-4256-8fb3-e4dac138a60a" containerName="collect-profiles"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.588626 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed080b2f-b791-4256-8fb3-e4dac138a60a" containerName="collect-profiles"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.592997 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.619162 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.692653 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlglf\" (UniqueName: \"kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.692833 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.692975 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.794771 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlglf\" (UniqueName: \"kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.794834 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.794894 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.795385 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.795673 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.822495 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlglf\" (UniqueName: \"kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf\") pod \"community-operators-jhl5t\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") " pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:29 crc kubenswrapper[4725]: I1002 12:15:29.921275 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:30 crc kubenswrapper[4725]: I1002 12:15:30.417228 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:30 crc kubenswrapper[4725]: W1002 12:15:30.417549 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d76a76_fcea_45c7_937b_d9a39ba2787e.slice/crio-1c029edccc3525aaa5fa0a843aa60dfaf119f1a6a184df440ca83a51c74924cd WatchSource:0}: Error finding container 1c029edccc3525aaa5fa0a843aa60dfaf119f1a6a184df440ca83a51c74924cd: Status 404 returned error can't find the container with id 1c029edccc3525aaa5fa0a843aa60dfaf119f1a6a184df440ca83a51c74924cd
Oct 02 12:15:30 crc kubenswrapper[4725]: I1002 12:15:30.586203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerStarted","Data":"1c029edccc3525aaa5fa0a843aa60dfaf119f1a6a184df440ca83a51c74924cd"}
Oct 02 12:15:31 crc kubenswrapper[4725]: I1002 12:15:31.605441 4725 generic.go:334] "Generic (PLEG): container finished" podID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerID="731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967" exitCode=0
Oct 02 12:15:31 crc kubenswrapper[4725]: I1002 12:15:31.605877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerDied","Data":"731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967"}
Oct 02 12:15:31 crc kubenswrapper[4725]: I1002 12:15:31.985069 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:31 crc kubenswrapper[4725]: I1002 12:15:31.989307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.009917 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.149047 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.149090 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.149122 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w528\" (UniqueName: \"kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.251584 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.252030 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.252147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w528\" (UniqueName: \"kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.252172 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.252586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.287712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w528\" (UniqueName: \"kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528\") pod \"certified-operators-z9hnw\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") " pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.328283 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:32 crc kubenswrapper[4725]: W1002 12:15:32.903836 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4489772_0c83_464a_a548_7cb3f65d460e.slice/crio-9e5fd067d4dc605d2d675772e8ea79d1fc7ea457911275a007b0a35e0d6ed4ab WatchSource:0}: Error finding container 9e5fd067d4dc605d2d675772e8ea79d1fc7ea457911275a007b0a35e0d6ed4ab: Status 404 returned error can't find the container with id 9e5fd067d4dc605d2d675772e8ea79d1fc7ea457911275a007b0a35e0d6ed4ab
Oct 02 12:15:32 crc kubenswrapper[4725]: I1002 12:15:32.907547 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:33 crc kubenswrapper[4725]: I1002 12:15:33.624966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerStarted","Data":"9e5fd067d4dc605d2d675772e8ea79d1fc7ea457911275a007b0a35e0d6ed4ab"}
Oct 02 12:15:34 crc kubenswrapper[4725]: I1002 12:15:34.646436 4725 generic.go:334] "Generic (PLEG): container finished" podID="c4489772-0c83-464a-a548-7cb3f65d460e" containerID="5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765" exitCode=0
Oct 02 12:15:34 crc kubenswrapper[4725]: I1002 12:15:34.646502 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerDied","Data":"5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765"}
Oct 02 12:15:34 crc kubenswrapper[4725]: I1002 12:15:34.651963 4725 generic.go:334] "Generic (PLEG): container finished" podID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerID="792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f" exitCode=0
Oct 02 12:15:34 crc kubenswrapper[4725]: I1002 12:15:34.652037 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerDied","Data":"792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f"}
Oct 02 12:15:36 crc kubenswrapper[4725]: I1002 12:15:36.683835 4725 generic.go:334] "Generic (PLEG): container finished" podID="c4489772-0c83-464a-a548-7cb3f65d460e" containerID="7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c" exitCode=0
Oct 02 12:15:36 crc kubenswrapper[4725]: I1002 12:15:36.684375 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerDied","Data":"7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c"}
Oct 02 12:15:36 crc kubenswrapper[4725]: I1002 12:15:36.835468 4725 scope.go:117] "RemoveContainer" containerID="931b89d04215239ada58d3db71b9d01cc5495fb373692a14928b165dc693e147"
Oct 02 12:15:37 crc kubenswrapper[4725]: I1002 12:15:37.722419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerStarted","Data":"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"}
Oct 02 12:15:37 crc kubenswrapper[4725]: I1002 12:15:37.727810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerStarted","Data":"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"}
Oct 02 12:15:37 crc kubenswrapper[4725]: I1002 12:15:37.757518 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z9hnw" podStartSLOduration=4.246256918 podStartE2EDuration="6.757493698s" podCreationTimestamp="2025-10-02 12:15:31 +0000 UTC" firstStartedPulling="2025-10-02 12:15:34.650215975 +0000 UTC m=+2854.557715438" lastFinishedPulling="2025-10-02 12:15:37.161452755 +0000 UTC m=+2857.068952218" observedRunningTime="2025-10-02 12:15:37.751681222 +0000 UTC m=+2857.659180705" watchObservedRunningTime="2025-10-02 12:15:37.757493698 +0000 UTC m=+2857.664993181"
Oct 02 12:15:37 crc kubenswrapper[4725]: I1002 12:15:37.777184 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jhl5t" podStartSLOduration=3.41140371 podStartE2EDuration="8.777163515s" podCreationTimestamp="2025-10-02 12:15:29 +0000 UTC" firstStartedPulling="2025-10-02 12:15:31.608208933 +0000 UTC m=+2851.515708396" lastFinishedPulling="2025-10-02 12:15:36.973968748 +0000 UTC m=+2856.881468201" observedRunningTime="2025-10-02 12:15:37.76989139 +0000 UTC m=+2857.677390863" watchObservedRunningTime="2025-10-02 12:15:37.777163515 +0000 UTC m=+2857.684662998"
Oct 02 12:15:39 crc kubenswrapper[4725]: I1002 12:15:39.923263 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:39 crc kubenswrapper[4725]: I1002 12:15:39.923513 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:39 crc kubenswrapper[4725]: I1002 12:15:39.987684 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:42 crc kubenswrapper[4725]: I1002 12:15:42.329621 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:42 crc kubenswrapper[4725]: I1002 12:15:42.330087 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:42 crc kubenswrapper[4725]: I1002 12:15:42.408610 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:42 crc kubenswrapper[4725]: I1002 12:15:42.859194 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:43 crc kubenswrapper[4725]: I1002 12:15:43.759447 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:44 crc kubenswrapper[4725]: I1002 12:15:44.799532 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z9hnw" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="registry-server" containerID="cri-o://b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01" gracePeriod=2
Oct 02 12:15:44 crc kubenswrapper[4725]: I1002 12:15:44.978121 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:15:44 crc kubenswrapper[4725]: I1002 12:15:44.978434 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.305197 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.473399 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w528\" (UniqueName: \"kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528\") pod \"c4489772-0c83-464a-a548-7cb3f65d460e\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") "
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.473546 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities\") pod \"c4489772-0c83-464a-a548-7cb3f65d460e\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") "
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.473714 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content\") pod \"c4489772-0c83-464a-a548-7cb3f65d460e\" (UID: \"c4489772-0c83-464a-a548-7cb3f65d460e\") "
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.474748 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities" (OuterVolumeSpecName: "utilities") pod "c4489772-0c83-464a-a548-7cb3f65d460e" (UID: "c4489772-0c83-464a-a548-7cb3f65d460e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.479741 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528" (OuterVolumeSpecName: "kube-api-access-9w528") pod "c4489772-0c83-464a-a548-7cb3f65d460e" (UID: "c4489772-0c83-464a-a548-7cb3f65d460e"). InnerVolumeSpecName "kube-api-access-9w528". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.527470 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4489772-0c83-464a-a548-7cb3f65d460e" (UID: "c4489772-0c83-464a-a548-7cb3f65d460e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.577176 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.577233 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4489772-0c83-464a-a548-7cb3f65d460e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.577250 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w528\" (UniqueName: \"kubernetes.io/projected/c4489772-0c83-464a-a548-7cb3f65d460e-kube-api-access-9w528\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.811362 4725 generic.go:334] "Generic (PLEG): container finished" podID="c4489772-0c83-464a-a548-7cb3f65d460e" containerID="b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01" exitCode=0
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.811406 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerDied","Data":"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"}
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.811432 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9hnw" event={"ID":"c4489772-0c83-464a-a548-7cb3f65d460e","Type":"ContainerDied","Data":"9e5fd067d4dc605d2d675772e8ea79d1fc7ea457911275a007b0a35e0d6ed4ab"}
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.811471 4725 scope.go:117] "RemoveContainer" containerID="b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.811596 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9hnw"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.856601 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.861141 4725 scope.go:117] "RemoveContainer" containerID="7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.865213 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z9hnw"]
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.887990 4725 scope.go:117] "RemoveContainer" containerID="5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.953539 4725 scope.go:117] "RemoveContainer" containerID="b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"
Oct 02 12:15:45 crc kubenswrapper[4725]: E1002 12:15:45.954543 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01\": container with ID starting with b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01 not found: ID does not exist" containerID="b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.954619 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01"} err="failed to get container status \"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01\": rpc error: code = NotFound desc = could not find container \"b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01\": container with ID starting with b4c882aa06623ddcd7302499a8a2c4ea4e77dd316fe4bb477277fe4ef59a1f01 not found: ID does not exist"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.954647 4725 scope.go:117] "RemoveContainer" containerID="7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c"
Oct 02 12:15:45 crc kubenswrapper[4725]: E1002 12:15:45.955069 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c\": container with ID starting with 7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c not found: ID does not exist" containerID="7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.955097 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c"} err="failed to get container status \"7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c\": rpc error: code = NotFound desc = could not find container \"7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c\": container with ID starting with 7047ddf95319429f11f65cbfe444722b921c86390acecc5b9391dfdabd19355c not found: ID does not exist"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.955111 4725 scope.go:117] "RemoveContainer" containerID="5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765"
Oct 02 12:15:45 crc kubenswrapper[4725]: E1002 12:15:45.955352 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765\": container with ID starting with 5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765 not found: ID does not exist" containerID="5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765"
Oct 02 12:15:45 crc kubenswrapper[4725]: I1002 12:15:45.955372 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765"} err="failed to get container status \"5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765\": rpc error: code = NotFound desc = could not find container \"5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765\": container with ID starting with 5a9cfef873b539b489e2c2f304507fd77db0eb61bc17b95bc8f05655acb73765 not found: ID does not exist"
Oct 02 12:15:47 crc kubenswrapper[4725]: I1002 12:15:47.278221 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" path="/var/lib/kubelet/pods/c4489772-0c83-464a-a548-7cb3f65d460e/volumes"
Oct 02 12:15:49 crc kubenswrapper[4725]: I1002 12:15:49.963379 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:50 crc kubenswrapper[4725]: I1002 12:15:50.005406 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:50 crc kubenswrapper[4725]: I1002 12:15:50.865664 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jhl5t" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="registry-server" containerID="cri-o://d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124" gracePeriod=2
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.367094 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.511169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlglf\" (UniqueName: \"kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf\") pod \"24d76a76-fcea-45c7-937b-d9a39ba2787e\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") "
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.511840 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities\") pod \"24d76a76-fcea-45c7-937b-d9a39ba2787e\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") "
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.511868 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content\") pod \"24d76a76-fcea-45c7-937b-d9a39ba2787e\" (UID: \"24d76a76-fcea-45c7-937b-d9a39ba2787e\") "
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.513007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities" (OuterVolumeSpecName: "utilities") pod "24d76a76-fcea-45c7-937b-d9a39ba2787e" (UID: "24d76a76-fcea-45c7-937b-d9a39ba2787e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.521058 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf" (OuterVolumeSpecName: "kube-api-access-qlglf") pod "24d76a76-fcea-45c7-937b-d9a39ba2787e" (UID: "24d76a76-fcea-45c7-937b-d9a39ba2787e"). InnerVolumeSpecName "kube-api-access-qlglf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.582524 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24d76a76-fcea-45c7-937b-d9a39ba2787e" (UID: "24d76a76-fcea-45c7-937b-d9a39ba2787e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.613916 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlglf\" (UniqueName: \"kubernetes.io/projected/24d76a76-fcea-45c7-937b-d9a39ba2787e-kube-api-access-qlglf\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.613959 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.613969 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d76a76-fcea-45c7-937b-d9a39ba2787e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.883364 4725 generic.go:334] "Generic (PLEG): container finished" podID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerID="d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124" exitCode=0
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.883411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerDied","Data":"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"}
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.883438 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhl5t" event={"ID":"24d76a76-fcea-45c7-937b-d9a39ba2787e","Type":"ContainerDied","Data":"1c029edccc3525aaa5fa0a843aa60dfaf119f1a6a184df440ca83a51c74924cd"}
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.883454 4725 scope.go:117] "RemoveContainer" containerID="d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.883591 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhl5t"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.918853 4725 scope.go:117] "RemoveContainer" containerID="792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.929106 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.942633 4725 scope.go:117] "RemoveContainer" containerID="731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.956199 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jhl5t"]
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.996484 4725 scope.go:117] "RemoveContainer" containerID="d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"
Oct 02 12:15:51 crc kubenswrapper[4725]: E1002 12:15:51.997203 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124\": container with ID starting with d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124 not found: ID does not exist" containerID="d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.997263 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124"} err="failed to get container status \"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124\": rpc error: code = NotFound desc = could not find container \"d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124\": container with ID starting with d9cb0e2770f53d5f862c2afd1a8774b1b7c9c43154422c96ee726766db940124 not found: ID does not exist"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.997308 4725 scope.go:117] "RemoveContainer" containerID="792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f"
Oct 02 12:15:51 crc kubenswrapper[4725]: E1002 12:15:51.997979 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f\": container with ID starting with 792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f not found: ID does not exist" containerID="792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.998039 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f"} err="failed to get container status \"792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f\": rpc error: code = NotFound desc = could not find container \"792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f\": container with ID starting with 792473c11af12a37f923c3e539446da959ccb624e800fcd9d7e58d01754cd31f not found: ID does not exist"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.998071 4725 scope.go:117] "RemoveContainer" containerID="731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967"
Oct 02 12:15:51 crc kubenswrapper[4725]: E1002 12:15:51.998446 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967\": container with ID starting with 731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967 not found: ID does not exist" containerID="731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967"
Oct 02 12:15:51 crc kubenswrapper[4725]: I1002 12:15:51.998491 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967"} err="failed to get container status \"731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967\": rpc error: code = NotFound desc = could not find container \"731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967\": container with ID starting with 731331268fda0a291d385e1532bf66f0f38ba3af4e7c29f7a20b0dad5bc9f967 not found: ID does not exist"
Oct 02 12:15:53 crc kubenswrapper[4725]: I1002 12:15:53.289514 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" path="/var/lib/kubelet/pods/24d76a76-fcea-45c7-937b-d9a39ba2787e/volumes"
Oct 02 12:16:14 crc kubenswrapper[4725]: I1002 12:16:14.978108 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:16:14 crc kubenswrapper[4725]: I1002 12:16:14.978843 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:16:44 crc kubenswrapper[4725]: I1002 12:16:44.979003 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:16:44 crc kubenswrapper[4725]: I1002 12:16:44.979440 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:16:44 crc kubenswrapper[4725]: I1002 12:16:44.979498 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx"
Oct 02 12:16:44 crc kubenswrapper[4725]: I1002 12:16:44.980282 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 12:16:44 crc kubenswrapper[4725]: I1002 12:16:44.980341 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx"
podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40" gracePeriod=600 Oct 02 12:16:45 crc kubenswrapper[4725]: I1002 12:16:45.450831 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40" exitCode=0 Oct 02 12:16:45 crc kubenswrapper[4725]: I1002 12:16:45.450920 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40"} Oct 02 12:16:45 crc kubenswrapper[4725]: I1002 12:16:45.451104 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491"} Oct 02 12:16:45 crc kubenswrapper[4725]: I1002 12:16:45.451141 4725 scope.go:117] "RemoveContainer" containerID="5eba52977e368fc2e71671ff27558f046f36a446063758d44eff60ff3fd8277a" Oct 02 12:19:14 crc kubenswrapper[4725]: I1002 12:19:14.978056 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:19:14 crc kubenswrapper[4725]: I1002 12:19:14.978818 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:19:44 crc kubenswrapper[4725]: I1002 12:19:44.978351 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:19:44 crc kubenswrapper[4725]: I1002 12:19:44.979005 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:20:14 crc kubenswrapper[4725]: I1002 12:20:14.978080 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:20:14 crc kubenswrapper[4725]: I1002 12:20:14.978749 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:20:14 
crc kubenswrapper[4725]: I1002 12:20:14.978815 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 12:20:14 crc kubenswrapper[4725]: I1002 12:20:14.979838 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 12:20:14 crc kubenswrapper[4725]: I1002 12:20:14.979941 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" gracePeriod=600 Oct 02 12:20:15 crc kubenswrapper[4725]: E1002 12:20:15.108219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:20:15 crc kubenswrapper[4725]: I1002 12:20:15.635469 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" exitCode=0 Oct 02 12:20:15 crc kubenswrapper[4725]: I1002 12:20:15.635515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491"} Oct 02 12:20:15 crc kubenswrapper[4725]: I1002 12:20:15.635613 4725 scope.go:117] "RemoveContainer" containerID="3f3acab63bcf9691081310606c70e989ebf0b4fc9495f665611ba94983c80a40" Oct 02 12:20:15 crc kubenswrapper[4725]: I1002 12:20:15.636553 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:20:15 crc kubenswrapper[4725]: E1002 12:20:15.637198 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:20:30 crc kubenswrapper[4725]: I1002 12:20:30.269052 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:20:30 crc kubenswrapper[4725]: E1002 12:20:30.269965 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:20:42 crc kubenswrapper[4725]: I1002 12:20:42.268325 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:20:42 crc kubenswrapper[4725]: E1002 12:20:42.269180 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:20:55 crc kubenswrapper[4725]: I1002 12:20:55.269170 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:20:55 crc kubenswrapper[4725]: E1002 12:20:55.270045 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:21:10 crc kubenswrapper[4725]: I1002 12:21:10.268329 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:21:10 crc kubenswrapper[4725]: E1002 12:21:10.269552 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:21:24 crc kubenswrapper[4725]: I1002 12:21:24.268712 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:21:24 crc kubenswrapper[4725]: E1002 12:21:24.269569 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:21:36 crc kubenswrapper[4725]: I1002 12:21:36.268957 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:21:36 crc kubenswrapper[4725]: E1002 12:21:36.269717 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:21:49 crc kubenswrapper[4725]: I1002 12:21:49.268289 4725 
scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:21:49 crc kubenswrapper[4725]: E1002 12:21:49.269392 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:04 crc kubenswrapper[4725]: I1002 12:22:04.268388 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:22:04 crc kubenswrapper[4725]: E1002 12:22:04.269466 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:16 crc kubenswrapper[4725]: I1002 12:22:16.268616 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:22:16 crc kubenswrapper[4725]: E1002 12:22:16.269654 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:27 crc kubenswrapper[4725]: I1002 12:22:27.268108 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:22:27 crc kubenswrapper[4725]: E1002 12:22:27.268871 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:42 crc kubenswrapper[4725]: I1002 12:22:42.268663 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:22:42 crc kubenswrapper[4725]: E1002 12:22:42.269360 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.167170 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168295 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168320 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168360 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="extract-content" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168371 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="extract-content" Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168394 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168406 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168426 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="extract-utilities" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168437 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="extract-utilities" Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168482 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="extract-content" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168492 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="extract-content" Oct 02 12:22:48 crc kubenswrapper[4725]: E1002 12:22:48.168515 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="extract-utilities" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168526 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="extract-utilities" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168857 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4489772-0c83-464a-a548-7cb3f65d460e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.168903 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d76a76-fcea-45c7-937b-d9a39ba2787e" containerName="registry-server" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.171356 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.189907 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.244657 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.244749 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.244790 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdb8j\" (UniqueName: \"kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.347047 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.347112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.347159 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdb8j\" (UniqueName: \"kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.348938 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.349307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.375553 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kdb8j\" (UniqueName: \"kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j\") pod \"redhat-marketplace-fllwf\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.551825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:48 crc kubenswrapper[4725]: I1002 12:22:48.998131 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:22:49 crc kubenswrapper[4725]: W1002 12:22:49.001168 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-5d2fdba7575980f998f2c65f376922b4da70c24f4f1149a74afbb160447120b6 WatchSource:0}: Error finding container 5d2fdba7575980f998f2c65f376922b4da70c24f4f1149a74afbb160447120b6: Status 404 returned error can't find the container with id 5d2fdba7575980f998f2c65f376922b4da70c24f4f1149a74afbb160447120b6 Oct 02 12:22:49 crc kubenswrapper[4725]: I1002 12:22:49.120787 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerStarted","Data":"5d2fdba7575980f998f2c65f376922b4da70c24f4f1149a74afbb160447120b6"} Oct 02 12:22:50 crc kubenswrapper[4725]: I1002 12:22:50.132465 4725 generic.go:334] "Generic (PLEG): container finished" podID="07643745-2402-4da5-b88d-37ff35602cbe" containerID="5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de" exitCode=0 Oct 02 12:22:50 crc kubenswrapper[4725]: I1002 12:22:50.132540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerDied","Data":"5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de"} Oct 02 12:22:50 crc kubenswrapper[4725]: I1002 12:22:50.134710 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 12:22:51 crc kubenswrapper[4725]: I1002 12:22:51.147043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerStarted","Data":"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2"} Oct 02 12:22:52 crc kubenswrapper[4725]: I1002 12:22:52.177807 4725 generic.go:334] "Generic (PLEG): container finished" podID="07643745-2402-4da5-b88d-37ff35602cbe" containerID="beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2" exitCode=0 Oct 02 12:22:52 crc kubenswrapper[4725]: I1002 12:22:52.178177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerDied","Data":"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2"} Oct 02 12:22:53 crc kubenswrapper[4725]: I1002 12:22:53.190434 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerStarted","Data":"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb"} Oct 02 12:22:53 crc kubenswrapper[4725]: I1002 12:22:53.219784 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fllwf" podStartSLOduration=2.694353178 podStartE2EDuration="5.219766612s" podCreationTimestamp="2025-10-02 12:22:48 +0000 UTC" firstStartedPulling="2025-10-02 12:22:50.134488522 +0000 UTC m=+3290.041987985" lastFinishedPulling="2025-10-02 12:22:52.659901946 +0000 UTC m=+3292.567401419" observedRunningTime="2025-10-02 12:22:53.212432705 +0000 UTC m=+3293.119932178" watchObservedRunningTime="2025-10-02 12:22:53.219766612 +0000 UTC m=+3293.127266065" Oct 02 12:22:55 crc kubenswrapper[4725]: I1002 12:22:55.268092 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:22:55 crc kubenswrapper[4725]: E1002 12:22:55.268578 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:22:58 crc kubenswrapper[4725]: I1002 12:22:58.561197 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:58 crc kubenswrapper[4725]: I1002 12:22:58.561772 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:58 crc kubenswrapper[4725]: I1002 12:22:58.613111 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:59 crc kubenswrapper[4725]: I1002 12:22:59.303325 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:22:59 crc kubenswrapper[4725]: I1002 12:22:59.367614 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:23:01 crc kubenswrapper[4725]: I1002 12:23:01.302292 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fllwf" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="registry-server" containerID="cri-o://8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb" gracePeriod=2 Oct 02 12:23:01 crc kubenswrapper[4725]: E1002 12:23:01.734285 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:23:01 crc kubenswrapper[4725]: I1002 12:23:01.934353 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.036026 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content\") pod \"07643745-2402-4da5-b88d-37ff35602cbe\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.036106 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities\") pod \"07643745-2402-4da5-b88d-37ff35602cbe\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.036366 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdb8j\" (UniqueName: \"kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j\") pod \"07643745-2402-4da5-b88d-37ff35602cbe\" (UID: \"07643745-2402-4da5-b88d-37ff35602cbe\") " Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.038452 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities" (OuterVolumeSpecName: "utilities") pod "07643745-2402-4da5-b88d-37ff35602cbe" (UID: "07643745-2402-4da5-b88d-37ff35602cbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.045096 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j" (OuterVolumeSpecName: "kube-api-access-kdb8j") pod "07643745-2402-4da5-b88d-37ff35602cbe" (UID: "07643745-2402-4da5-b88d-37ff35602cbe"). InnerVolumeSpecName "kube-api-access-kdb8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.047762 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07643745-2402-4da5-b88d-37ff35602cbe" (UID: "07643745-2402-4da5-b88d-37ff35602cbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.138828 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdb8j\" (UniqueName: \"kubernetes.io/projected/07643745-2402-4da5-b88d-37ff35602cbe-kube-api-access-kdb8j\") on node \"crc\" DevicePath \"\"" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.138888 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.138907 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07643745-2402-4da5-b88d-37ff35602cbe-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.289526 4725 generic.go:334] "Generic (PLEG): container finished" podID="07643745-2402-4da5-b88d-37ff35602cbe" containerID="8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb" exitCode=0 Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.289596 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerDied","Data":"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb"} Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.289630 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fllwf" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.289980 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fllwf" event={"ID":"07643745-2402-4da5-b88d-37ff35602cbe","Type":"ContainerDied","Data":"5d2fdba7575980f998f2c65f376922b4da70c24f4f1149a74afbb160447120b6"} Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.290092 4725 scope.go:117] "RemoveContainer" containerID="8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.313562 4725 scope.go:117] "RemoveContainer" containerID="beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.336911 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.346688 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fllwf"] Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.374516 4725 scope.go:117] "RemoveContainer" containerID="5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.428811 4725 scope.go:117] "RemoveContainer" containerID="8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb" Oct 02 12:23:02 crc kubenswrapper[4725]: E1002 12:23:02.429901 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb\": container with ID starting with 8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb not found: ID does not exist" containerID="8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.430438 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb"} err="failed to get container status \"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb\": rpc error: code = NotFound desc = could not find container \"8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb\": container with ID starting with 8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb not found: ID does not exist" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.430491 4725 scope.go:117] "RemoveContainer" containerID="beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2" Oct 02 12:23:02 crc kubenswrapper[4725]: E1002 12:23:02.431561 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2\": container with ID starting with beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2 not found: ID does not exist" containerID="beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.431613 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2"} err="failed to get container status \"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2\": rpc error: code = NotFound desc = could not find container \"beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2\": container with ID starting with beae7ff2ee49afd54d05dbb1671745346aaf68f63f40c626f910fe21048df2a2 not found: ID does not exist" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.431648 4725 scope.go:117] "RemoveContainer" containerID="5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de" Oct 02 12:23:02 crc kubenswrapper[4725]: E1002 12:23:02.432337 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de\": container with ID starting with 5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de not found: ID does not exist" containerID="5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de" Oct 02 12:23:02 crc kubenswrapper[4725]: I1002 12:23:02.432405 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de"} err="failed to get container status \"5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de\": rpc error: code = NotFound desc = could not find container \"5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de\": container with ID starting with 5008762af88c0716f6a9e2bf7b01d17a5610c8b0d499b3911c209199ba4850de not found: ID does not exist" Oct 02 12:23:03 crc kubenswrapper[4725]: I1002 12:23:03.280535 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07643745-2402-4da5-b88d-37ff35602cbe" path="/var/lib/kubelet/pods/07643745-2402-4da5-b88d-37ff35602cbe/volumes" Oct 02 12:23:07 crc kubenswrapper[4725]: I1002 12:23:07.269113 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:23:07 crc kubenswrapper[4725]: E1002 12:23:07.270265 4725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:23:11 crc kubenswrapper[4725]: E1002 12:23:11.978638 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:23:19 crc kubenswrapper[4725]: I1002 12:23:19.269668 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:23:19 crc kubenswrapper[4725]: E1002 12:23:19.270471 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:23:22 crc kubenswrapper[4725]: E1002 12:23:22.236855 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:23:32 crc kubenswrapper[4725]: E1002 12:23:32.496453 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:23:33 crc kubenswrapper[4725]: I1002 12:23:33.268214 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:23:33 crc kubenswrapper[4725]: E1002 12:23:33.268622 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:23:42 crc kubenswrapper[4725]: E1002 12:23:42.822478 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:23:46 crc kubenswrapper[4725]: I1002 12:23:46.268620 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 
02 12:23:46 crc kubenswrapper[4725]: E1002 12:23:46.270038 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:23:53 crc kubenswrapper[4725]: E1002 12:23:53.124149 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07643745_2402_4da5_b88d_37ff35602cbe.slice/crio-8cf97d8803da0d85e00043ef1e5cb5221c750ead4651403d37c67736efe8a5eb.scope\": RecentStats: unable to find data in memory cache]" Oct 02 12:24:01 crc kubenswrapper[4725]: I1002 12:24:01.274709 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:24:01 crc kubenswrapper[4725]: E1002 12:24:01.275585 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:24:13 crc kubenswrapper[4725]: I1002 12:24:13.268562 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:24:13 crc kubenswrapper[4725]: E1002 12:24:13.269353 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:24:25 crc kubenswrapper[4725]: I1002 12:24:25.269201 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:24:25 crc kubenswrapper[4725]: E1002 12:24:25.270506 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:24:37 crc kubenswrapper[4725]: I1002 12:24:37.287585 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:24:37 crc kubenswrapper[4725]: E1002 12:24:37.293636 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" 
podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:24:49 crc kubenswrapper[4725]: I1002 12:24:49.268612 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:24:49 crc kubenswrapper[4725]: E1002 12:24:49.269476 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:25:00 crc kubenswrapper[4725]: I1002 12:25:00.268111 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:25:00 crc kubenswrapper[4725]: E1002 12:25:00.269129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:25:15 crc kubenswrapper[4725]: I1002 12:25:15.268560 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491" Oct 02 12:25:15 crc kubenswrapper[4725]: I1002 12:25:15.738464 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1"} Oct 02 12:26:03 crc kubenswrapper[4725]: I1002 12:26:03.122045 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="58f46069-09a8-4501-95a3-70b3d03ee211" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.181:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 12:26:12 crc kubenswrapper[4725]: I1002 12:26:12.325152 4725 generic.go:334] "Generic (PLEG): container finished" podID="0e4dadf0-6f31-4e9d-8590-5324693180b6" containerID="b039c63dc9d35e98738a8c80b2f1363d310205ed0a46a6925404d976e47e76e6" exitCode=0 Oct 02 12:26:12 crc kubenswrapper[4725]: I1002 12:26:12.325259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e4dadf0-6f31-4e9d-8590-5324693180b6","Type":"ContainerDied","Data":"b039c63dc9d35e98738a8c80b2f1363d310205ed0a46a6925404d976e47e76e6"} Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.731664 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854019 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854126 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwd9g\" (UniqueName: \"kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854250 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854315 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854389 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854432 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.854465 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key\") pod \"0e4dadf0-6f31-4e9d-8590-5324693180b6\" (UID: \"0e4dadf0-6f31-4e9d-8590-5324693180b6\") " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.856272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.856785 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data" (OuterVolumeSpecName: "config-data") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.862676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.864584 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g" (OuterVolumeSpecName: "kube-api-access-gwd9g") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "kube-api-access-gwd9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.885209 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.889409 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.902098 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.919661 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.942003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0e4dadf0-6f31-4e9d-8590-5324693180b6" (UID: "0e4dadf0-6f31-4e9d-8590-5324693180b6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957767 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957804 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwd9g\" (UniqueName: \"kubernetes.io/projected/0e4dadf0-6f31-4e9d-8590-5324693180b6-kube-api-access-gwd9g\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957819 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e4dadf0-6f31-4e9d-8590-5324693180b6-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957830 4725 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957842 4725 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957855 4725 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e4dadf0-6f31-4e9d-8590-5324693180b6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957892 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957905 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.957918 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e4dadf0-6f31-4e9d-8590-5324693180b6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:13 crc kubenswrapper[4725]: I1002 12:26:13.980640 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 02 12:26:14 crc kubenswrapper[4725]: I1002 12:26:14.059599 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 02 12:26:14 crc kubenswrapper[4725]: I1002 12:26:14.361441 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e4dadf0-6f31-4e9d-8590-5324693180b6","Type":"ContainerDied","Data":"d4b49af898710890196a5787718b580c65c165cb1a9288cdc54046fd87df919d"} Oct 02 12:26:14 crc kubenswrapper[4725]: I1002 12:26:14.361513 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b49af898710890196a5787718b580c65c165cb1a9288cdc54046fd87df919d" Oct 02 12:26:14 crc kubenswrapper[4725]: I1002 12:26:14.361573 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.951058 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 12:26:21 crc kubenswrapper[4725]: E1002 12:26:21.951985 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="extract-utilities" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952000 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="extract-utilities" Oct 02 12:26:21 crc kubenswrapper[4725]: E1002 12:26:21.952017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="extract-content" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952023 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="extract-content" Oct 02 12:26:21 crc kubenswrapper[4725]: E1002 12:26:21.952034 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="registry-server" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952041 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="registry-server" Oct 02 12:26:21 crc kubenswrapper[4725]: E1002 12:26:21.952060 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4dadf0-6f31-4e9d-8590-5324693180b6" containerName="tempest-tests-tempest-tests-runner" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952067 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4dadf0-6f31-4e9d-8590-5324693180b6" containerName="tempest-tests-tempest-tests-runner" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952232 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="07643745-2402-4da5-b88d-37ff35602cbe" containerName="registry-server" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952244 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4dadf0-6f31-4e9d-8590-5324693180b6" containerName="tempest-tests-tempest-tests-runner" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.952837 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.954589 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-967nc" Oct 02 12:26:21 crc kubenswrapper[4725]: I1002 12:26:21.970464 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.129949 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.130130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhw2\" (UniqueName: \"kubernetes.io/projected/7934978f-21e8-4a23-adfb-b9e97b479458-kube-api-access-jmhw2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.231711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.231855 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhw2\" (UniqueName: \"kubernetes.io/projected/7934978f-21e8-4a23-adfb-b9e97b479458-kube-api-access-jmhw2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.232212 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.253232 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhw2\" (UniqueName: \"kubernetes.io/projected/7934978f-21e8-4a23-adfb-b9e97b479458-kube-api-access-jmhw2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.276772 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7934978f-21e8-4a23-adfb-b9e97b479458\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc 
kubenswrapper[4725]: I1002 12:26:22.288471 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 12:26:22 crc kubenswrapper[4725]: I1002 12:26:22.746736 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 12:26:22 crc kubenswrapper[4725]: W1002 12:26:22.759927 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7934978f_21e8_4a23_adfb_b9e97b479458.slice/crio-76625bb50078cce7d2c9e58d54a181ac1792e502323f651047bea845d312fb92 WatchSource:0}: Error finding container 76625bb50078cce7d2c9e58d54a181ac1792e502323f651047bea845d312fb92: Status 404 returned error can't find the container with id 76625bb50078cce7d2c9e58d54a181ac1792e502323f651047bea845d312fb92 Oct 02 12:26:23 crc kubenswrapper[4725]: I1002 12:26:23.454687 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7934978f-21e8-4a23-adfb-b9e97b479458","Type":"ContainerStarted","Data":"76625bb50078cce7d2c9e58d54a181ac1792e502323f651047bea845d312fb92"} Oct 02 12:26:24 crc kubenswrapper[4725]: I1002 12:26:24.469170 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7934978f-21e8-4a23-adfb-b9e97b479458","Type":"ContainerStarted","Data":"0636680440ebdc4da76b99fea226b19e408f6d29fd319f7c0fed4808e05a2913"} Oct 02 12:26:24 crc kubenswrapper[4725]: I1002 12:26:24.491157 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.5661082779999997 podStartE2EDuration="3.49113702s" podCreationTimestamp="2025-10-02 12:26:21 +0000 UTC" firstStartedPulling="2025-10-02 12:26:22.763221383 +0000 UTC m=+3502.670720856" lastFinishedPulling="2025-10-02 12:26:23.688250135 +0000 UTC m=+3503.595749598" observedRunningTime="2025-10-02 12:26:24.483766622 +0000 UTC m=+3504.391266135" watchObservedRunningTime="2025-10-02 12:26:24.49113702 +0000 UTC m=+3504.398636483" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.755085 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pchnl/must-gather-vzf4d"] Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.757600 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.764420 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pchnl"/"kube-root-ca.crt" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.764660 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pchnl"/"default-dockercfg-q44w8" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.764894 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pchnl"/"openshift-service-ca.crt" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.775373 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pchnl/must-gather-vzf4d"] Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.837469 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qcx\" (UniqueName: \"kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.837582 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.939151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qcx\" (UniqueName: \"kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.939212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.939711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:41 crc kubenswrapper[4725]: I1002 12:26:41.957458 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qcx\" (UniqueName: \"kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx\") pod \"must-gather-vzf4d\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:42 crc kubenswrapper[4725]: I1002 12:26:42.077343 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:26:42 crc kubenswrapper[4725]: I1002 12:26:42.558591 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pchnl/must-gather-vzf4d"] Oct 02 12:26:42 crc kubenswrapper[4725]: I1002 12:26:42.679911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/must-gather-vzf4d" event={"ID":"71dc8102-d90b-430d-8edd-5661f65956a7","Type":"ContainerStarted","Data":"38db34783c2abdeb92bee0287c47c97fbf13fdf029275974fe273ff450724e9d"} Oct 02 12:26:50 crc kubenswrapper[4725]: I1002 12:26:50.765137 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/must-gather-vzf4d" event={"ID":"71dc8102-d90b-430d-8edd-5661f65956a7","Type":"ContainerStarted","Data":"876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4"} Oct 02 12:26:50 crc kubenswrapper[4725]: I1002 12:26:50.765638 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/must-gather-vzf4d" event={"ID":"71dc8102-d90b-430d-8edd-5661f65956a7","Type":"ContainerStarted","Data":"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103"} Oct 02 12:26:50 crc kubenswrapper[4725]: I1002 12:26:50.782467 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pchnl/must-gather-vzf4d" podStartSLOduration=2.312139308 podStartE2EDuration="9.782445132s" podCreationTimestamp="2025-10-02 12:26:41 +0000 UTC" firstStartedPulling="2025-10-02 12:26:42.55723896 +0000 UTC m=+3522.464738433" lastFinishedPulling="2025-10-02 12:26:50.027544754 +0000 UTC m=+3529.935044257" observedRunningTime="2025-10-02 12:26:50.777421017 +0000 UTC m=+3530.684920490" watchObservedRunningTime="2025-10-02 12:26:50.782445132 +0000 UTC m=+3530.689944595" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.198179 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pchnl/crc-debug-scd78"] Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.200031 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.308008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7znw\" (UniqueName: \"kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.308056 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.409527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7znw\" (UniqueName: \"kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.409578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.409751 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.430560 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7znw\" (UniqueName: \"kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw\") pod \"crc-debug-scd78\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") " pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.520586 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-scd78" Oct 02 12:26:55 crc kubenswrapper[4725]: I1002 12:26:55.817949 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-scd78" event={"ID":"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13","Type":"ContainerStarted","Data":"14357596cb525c82f498c8e801313460f8bb16ce8116fdbb8e3db89624b5ba5d"} Oct 02 12:27:08 crc kubenswrapper[4725]: I1002 12:27:08.946251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-scd78" event={"ID":"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13","Type":"ContainerStarted","Data":"0c2ea6eeddf7ba03d7ff1495b65226d9ead1c3ed0ac95936a6af4415af67080c"} Oct 02 12:27:08 crc kubenswrapper[4725]: I1002 12:27:08.972838 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pchnl/crc-debug-scd78" podStartSLOduration=0.884648496 podStartE2EDuration="13.972817655s" podCreationTimestamp="2025-10-02 12:26:55 +0000 UTC" firstStartedPulling="2025-10-02 12:26:55.559192856 +0000 UTC m=+3535.466692319" lastFinishedPulling="2025-10-02 12:27:08.647361995 +0000 UTC m=+3548.554861478" observedRunningTime="2025-10-02 12:27:08.961930833 +0000 UTC m=+3548.869430306" watchObservedRunningTime="2025-10-02 12:27:08.972817655 +0000 UTC m=+3548.880317148" Oct 02 12:27:44 crc kubenswrapper[4725]: I1002 12:27:44.978371 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:27:44 crc kubenswrapper[4725]: I1002 12:27:44.978953 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:28:03 crc kubenswrapper[4725]: I1002 12:28:03.850901 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d64c5b6c4-wjr9t_d2ef4726-c6b9-4bb3-909d-af176b24f2c8/barbican-api-log/0.log" Oct 02 12:28:03 crc kubenswrapper[4725]: I1002 12:28:03.886983 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d64c5b6c4-wjr9t_d2ef4726-c6b9-4bb3-909d-af176b24f2c8/barbican-api/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.104833 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f86f598-mgjgh_7a8e9323-4d6b-4015-80bb-5d2752bfd94c/barbican-keystone-listener/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.148990 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f86f598-mgjgh_7a8e9323-4d6b-4015-80bb-5d2752bfd94c/barbican-keystone-listener-log/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.350462 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5798d58dff-jkj6h_2ba9160b-539e-40a1-8d2f-4cb0f25e4084/barbican-worker/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.388406 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5798d58dff-jkj6h_2ba9160b-539e-40a1-8d2f-4cb0f25e4084/barbican-worker-log/0.log" Oct 02 12:28:04 crc 
kubenswrapper[4725]: I1002 12:28:04.568990 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5_58fb4d5d-e01c-4ede-91c8-9674a71c34a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.765139 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/ceilometer-central-agent/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.797763 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/ceilometer-notification-agent/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.860980 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/proxy-httpd/0.log" Oct 02 12:28:04 crc kubenswrapper[4725]: I1002 12:28:04.995218 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/sg-core/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.121094 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ced9aab-c4e7-4463-9d29-d32521d07220/cinder-api/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.230084 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ced9aab-c4e7-4463-9d29-d32521d07220/cinder-api-log/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.345982 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_58f46069-09a8-4501-95a3-70b3d03ee211/cinder-scheduler/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.453583 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_58f46069-09a8-4501-95a3-70b3d03ee211/probe/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.534475 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2_15f9480b-ec9f-48d1-9778-1376f2c1245e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.762915 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r_b1d44487-97d6-4e7d-856d-61aec07be83c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:05 crc kubenswrapper[4725]: I1002 12:28:05.935336 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z874b_2864a400-a21a-4c43-b078-16fece86e8fb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.060476 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/init/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.269636 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/init/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.356140 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/dnsmasq-dns/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.515261 4725 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9_05b5e1c6-efe9-4a6f-a623-c058ae2e301a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.551529 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_138ed7a6-24d0-4071-b142-ece9a296eb65/glance-httpd/0.log" Oct 02 12:28:06 crc kubenswrapper[4725]: I1002 12:28:06.753564 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_138ed7a6-24d0-4071-b142-ece9a296eb65/glance-log/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.038836 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_55c8573f-3cb6-4d8c-8b84-dfa5f6221f42/glance-log/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.062262 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_55c8573f-3cb6-4d8c-8b84-dfa5f6221f42/glance-httpd/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.480300 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b797cdcc6-7cf2m_9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8/horizon/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.578444 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-hgh68_72c6c1e1-6428-4023-b56e-ee525bc50c65/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.727953 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b797cdcc6-7cf2m_9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8/horizon-log/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.741705 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nq8fs_25f013e8-9c08-40f3-84d8-2ddcb5528f44/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:07 crc kubenswrapper[4725]: I1002 12:28:07.902557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29323441-5xvm4_7aa98170-54f3-4694-95d1-22b25f1512ba/keystone-cron/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.098641 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f5bf68656-dnz2c_7458e87c-8d2c-4e87-9577-c718b49f9e85/keystone-api/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.149530 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e4b7ce88-f603-426c-9af7-b2cccde7469d/kube-state-metrics/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.525358 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt_c525e8cd-4d87-4d2b-9d78-73199eebbbee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.740091 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-855d67b977-b45rh_714afd76-15e2-4584-a68c-50f3d524f3da/neutron-api/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.762325 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-855d67b977-b45rh_714afd76-15e2-4584-a68c-50f3d524f3da/neutron-httpd/0.log" Oct 02 12:28:08 crc kubenswrapper[4725]: I1002 12:28:08.995848 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc_c4d85400-0823-4e3c-b7b2-f7902c817c33/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:09 crc kubenswrapper[4725]: I1002 12:28:09.589706 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_91849e10-6e8e-466f-a603-1c15622941c6/nova-api-log/0.log" Oct 02 12:28:09 crc kubenswrapper[4725]: I1002 12:28:09.607771 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_91849e10-6e8e-466f-a603-1c15622941c6/nova-api-api/0.log" Oct 02 12:28:09 crc kubenswrapper[4725]: I1002 12:28:09.688338 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bdeadf2b-92a4-44ae-803a-493d9ef4a7c2/nova-cell0-conductor-conductor/0.log" Oct 02 12:28:09 crc kubenswrapper[4725]: I1002 12:28:09.930823 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_431a2433-2959-4ab9-a6ed-2dc9dc8ef55a/nova-cell1-conductor-conductor/0.log" Oct 02 12:28:10 crc kubenswrapper[4725]: I1002 12:28:10.131560 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_eb9c8f07-9e51-46ad-87b1-a71668a04d3d/nova-cell1-novncproxy-novncproxy/0.log" Oct 02 12:28:10 crc kubenswrapper[4725]: I1002 12:28:10.226389 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jxhn6_e1ff53ac-ab56-4b7f-99b1-4ed1c299af72/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:28:10 crc kubenswrapper[4725]: I1002 12:28:10.677557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_be5ed584-4418-4447-8ed6-2e89c70e903b/nova-metadata-log/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.051259 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_19a3659e-e721-4f41-932d-978e69b77755/nova-scheduler-scheduler/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.259115 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/mysql-bootstrap/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.430494 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/mysql-bootstrap/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.440331 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/galera/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.639658 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/mysql-bootstrap/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.795037 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_be5ed584-4418-4447-8ed6-2e89c70e903b/nova-metadata-metadata/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.904771 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/mysql-bootstrap/0.log" Oct 02 12:28:11 crc kubenswrapper[4725]: I1002 12:28:11.975179 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/galera/0.log" 
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.115528 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a694a92f-563d-41d0-908e-744aec98dd01/openstackclient/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.292364 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-98gqf_ba80438e-e220-487f-b365-27a8224c7ef2/ovn-controller/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.485215 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2mzwk_d765fdd7-c196-4fdc-b5ae-813c10a8bd2b/openstack-network-exporter/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.557611 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server-init/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.841059 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovs-vswitchd/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.883236 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server/0.log"
Oct 02 12:28:12 crc kubenswrapper[4725]: I1002 12:28:12.909913 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server-init/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.177916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ctk5t_db895bfa-9a45-45f3-8214-cf8c9e1a1351/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.334985 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041/openstack-network-exporter/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.430972 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041/ovn-northd/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.524743 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3c790f7-722b-4693-a9cc-ba649c5833ca/openstack-network-exporter/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.702760 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3c790f7-722b-4693-a9cc-ba649c5833ca/ovsdbserver-nb/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.782338 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f8dd7ed6-4794-4bf8-8d40-8bb837848eed/openstack-network-exporter/0.log"
Oct 02 12:28:13 crc kubenswrapper[4725]: I1002 12:28:13.912748 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f8dd7ed6-4794-4bf8-8d40-8bb837848eed/ovsdbserver-sb/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.049528 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b656dd8b-n4tcm_7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a/placement-api/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.224874 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b656dd8b-n4tcm_7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a/placement-log/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.294143 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/setup-container/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.452752 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/setup-container/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.452906 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/rabbitmq/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.702344 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/setup-container/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.895171 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/setup-container/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.903768 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/rabbitmq/0.log"
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.977738 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:28:14 crc kubenswrapper[4725]: I1002 12:28:14.979201 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.097872 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-472wk_e04f0ae5-20a3-47c1-a877-d717d8d7feb8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.196083 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qt8kt_c238f16b-d636-421b-bbbf-53870c63c217/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.370049 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq_d070422a-2b6f-42b9-8765-6f630ad4b68f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.548503 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-97mpr_058f1e15-ad8b-4b61-a8e9-f98422ba2151/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.691448 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9kgwd_3377a61c-191d-4a8c-a342-9556746ea6e0/ssh-known-hosts-edpm-deployment/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.934884 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76bb8577f-p858j_92f1433d-ba22-410b-b18f-b048e5ac47a7/proxy-httpd/0.log"
Oct 02 12:28:15 crc kubenswrapper[4725]: I1002 12:28:15.946464 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76bb8577f-p858j_92f1433d-ba22-410b-b18f-b048e5ac47a7/proxy-server/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.137854 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zsfz8_66f1562e-003f-4f29-a7ba-2c42b823662e/swift-ring-rebalance/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.322499 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-auditor/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.390309 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-reaper/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.460166 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-replicator/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.509869 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-server/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.573242 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-auditor/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.702453 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-replicator/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.763665 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-server/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.795790 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-updater/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.937171 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-auditor/0.log"
Oct 02 12:28:16 crc kubenswrapper[4725]: I1002 12:28:16.955601 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-expirer/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.065919 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-replicator/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.180087 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-server/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.235783 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-updater/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.300847 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/rsync/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.408242 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/swift-recon-cron/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.595023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-csk6j_b3aedfea-069e-4211-878c-b85e0bb9d3ac/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.744701 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e4dadf0-6f31-4e9d-8590-5324693180b6/tempest-tests-tempest-tests-runner/0.log"
Oct 02 12:28:17 crc kubenswrapper[4725]: I1002 12:28:17.902762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7934978f-21e8-4a23-adfb-b9e97b479458/test-operator-logs-container/0.log"
Oct 02 12:28:18 crc kubenswrapper[4725]: I1002 12:28:18.138602 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls_b10901fd-a5d7-431f-a105-ff03a7554335/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 02 12:28:23 crc kubenswrapper[4725]: I1002 12:28:23.698545 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4/memcached/0.log"
Oct 02 12:28:44 crc kubenswrapper[4725]: I1002 12:28:44.978855 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:28:44 crc kubenswrapper[4725]: I1002 12:28:44.979440 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:28:44 crc kubenswrapper[4725]: I1002 12:28:44.979500 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx"
Oct 02 12:28:44 crc kubenswrapper[4725]: I1002 12:28:44.980413 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 12:28:44 crc kubenswrapper[4725]: I1002 12:28:44.980489 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1" gracePeriod=600
Oct 02 12:28:45 crc kubenswrapper[4725]: I1002 12:28:45.930280 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1" exitCode=0
Oct 02 12:28:45 crc kubenswrapper[4725]: I1002 12:28:45.930310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1"}
Oct 02 12:28:45 crc kubenswrapper[4725]: I1002 12:28:45.930784 4725 scope.go:117] "RemoveContainer" containerID="d8e4680ff4b0f7ce9949cf9fdb76c2358a3b606882a2ac8858201823fe78e491"
Oct 02 12:28:45 crc kubenswrapper[4725]: I1002 12:28:45.930704 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"}
Oct 02 12:29:29 crc kubenswrapper[4725]: I1002 12:29:29.503130 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" containerID="0c2ea6eeddf7ba03d7ff1495b65226d9ead1c3ed0ac95936a6af4415af67080c" exitCode=0
Oct 02 12:29:29 crc kubenswrapper[4725]: I1002 12:29:29.503215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-scd78" event={"ID":"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13","Type":"ContainerDied","Data":"0c2ea6eeddf7ba03d7ff1495b65226d9ead1c3ed0ac95936a6af4415af67080c"}
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.612112 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-scd78"
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.647991 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-scd78"]
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.656024 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-scd78"]
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.778347 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7znw\" (UniqueName: \"kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw\") pod \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") "
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.778427 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host\") pod \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\" (UID: \"3ee123fa-8df6-4c82-9aa0-74df6a9c2a13\") "
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.778580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host" (OuterVolumeSpecName: "host") pod "3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" (UID: "3ee123fa-8df6-4c82-9aa0-74df6a9c2a13"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.779320 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-host\") on node \"crc\" DevicePath \"\""
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.783741 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw" (OuterVolumeSpecName: "kube-api-access-h7znw") pod "3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" (UID: "3ee123fa-8df6-4c82-9aa0-74df6a9c2a13"). InnerVolumeSpecName "kube-api-access-h7znw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 12:29:30 crc kubenswrapper[4725]: I1002 12:29:30.880956 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7znw\" (UniqueName: \"kubernetes.io/projected/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13-kube-api-access-h7znw\") on node \"crc\" DevicePath \"\""
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.320007 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" path="/var/lib/kubelet/pods/3ee123fa-8df6-4c82-9aa0-74df6a9c2a13/volumes"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.521635 4725 scope.go:117] "RemoveContainer" containerID="0c2ea6eeddf7ba03d7ff1495b65226d9ead1c3ed0ac95936a6af4415af67080c"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.521668 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-scd78"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.841847 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pchnl/crc-debug-m8scf"]
Oct 02 12:29:31 crc kubenswrapper[4725]: E1002 12:29:31.842238 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" containerName="container-00"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.842250 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" containerName="container-00"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.842427 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee123fa-8df6-4c82-9aa0-74df6a9c2a13" containerName="container-00"
Oct 02 12:29:31 crc kubenswrapper[4725]: I1002 12:29:31.843019 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.004053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.004493 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfqf\" (UniqueName: \"kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.105830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfqf\" (UniqueName: \"kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.105929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.106042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.132375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfqf\" (UniqueName: \"kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf\") pod \"crc-debug-m8scf\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") " pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.163636 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.532495 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-m8scf" event={"ID":"22d1acc2-1001-45af-9671-994dab7256e9","Type":"ContainerStarted","Data":"64f6d7a637590ac2cc519e15f1000680314e88959ac650e428887572d93521f7"}
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.532833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-m8scf" event={"ID":"22d1acc2-1001-45af-9671-994dab7256e9","Type":"ContainerStarted","Data":"1097b06cd492bfccb76f42e3332cbbcc65ac380d965899c89e875be38aa12271"}
Oct 02 12:29:32 crc kubenswrapper[4725]: I1002 12:29:32.546087 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pchnl/crc-debug-m8scf" podStartSLOduration=1.5460627439999999 podStartE2EDuration="1.546062744s" podCreationTimestamp="2025-10-02 12:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 12:29:32.544185244 +0000 UTC m=+3692.451684717" watchObservedRunningTime="2025-10-02 12:29:32.546062744 +0000 UTC m=+3692.453562217"
Oct 02 12:29:33 crc kubenswrapper[4725]: I1002 12:29:33.543713 4725 generic.go:334] "Generic (PLEG): container finished" podID="22d1acc2-1001-45af-9671-994dab7256e9" containerID="64f6d7a637590ac2cc519e15f1000680314e88959ac650e428887572d93521f7" exitCode=0
Oct 02 12:29:33 crc kubenswrapper[4725]: I1002 12:29:33.544014 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-m8scf" event={"ID":"22d1acc2-1001-45af-9671-994dab7256e9","Type":"ContainerDied","Data":"64f6d7a637590ac2cc519e15f1000680314e88959ac650e428887572d93521f7"}
Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.642879 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-m8scf"
Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.749040 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfqf\" (UniqueName: \"kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf\") pod \"22d1acc2-1001-45af-9671-994dab7256e9\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") "
Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.749096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host\") pod \"22d1acc2-1001-45af-9671-994dab7256e9\" (UID: \"22d1acc2-1001-45af-9671-994dab7256e9\") "
Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.749650 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host" (OuterVolumeSpecName: "host") pod "22d1acc2-1001-45af-9671-994dab7256e9" (UID: "22d1acc2-1001-45af-9671-994dab7256e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.764047 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf" (OuterVolumeSpecName: "kube-api-access-7lfqf") pod "22d1acc2-1001-45af-9671-994dab7256e9" (UID: "22d1acc2-1001-45af-9671-994dab7256e9").
InnerVolumeSpecName "kube-api-access-7lfqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.853712 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfqf\" (UniqueName: \"kubernetes.io/projected/22d1acc2-1001-45af-9671-994dab7256e9-kube-api-access-7lfqf\") on node \"crc\" DevicePath \"\"" Oct 02 12:29:34 crc kubenswrapper[4725]: I1002 12:29:34.853750 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22d1acc2-1001-45af-9671-994dab7256e9-host\") on node \"crc\" DevicePath \"\"" Oct 02 12:29:35 crc kubenswrapper[4725]: I1002 12:29:35.561498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-m8scf" event={"ID":"22d1acc2-1001-45af-9671-994dab7256e9","Type":"ContainerDied","Data":"1097b06cd492bfccb76f42e3332cbbcc65ac380d965899c89e875be38aa12271"} Oct 02 12:29:35 crc kubenswrapper[4725]: I1002 12:29:35.561548 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-m8scf" Oct 02 12:29:35 crc kubenswrapper[4725]: I1002 12:29:35.561568 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1097b06cd492bfccb76f42e3332cbbcc65ac380d965899c89e875be38aa12271" Oct 02 12:29:38 crc kubenswrapper[4725]: I1002 12:29:38.873498 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-m8scf"] Oct 02 12:29:38 crc kubenswrapper[4725]: I1002 12:29:38.888440 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-m8scf"] Oct 02 12:29:39 crc kubenswrapper[4725]: I1002 12:29:39.280286 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d1acc2-1001-45af-9671-994dab7256e9" path="/var/lib/kubelet/pods/22d1acc2-1001-45af-9671-994dab7256e9/volumes" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.076752 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pchnl/crc-debug-j8zrw"] Oct 02 12:29:40 crc kubenswrapper[4725]: E1002 12:29:40.077563 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d1acc2-1001-45af-9671-994dab7256e9" containerName="container-00" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.077581 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d1acc2-1001-45af-9671-994dab7256e9" containerName="container-00" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.077852 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d1acc2-1001-45af-9671-994dab7256e9" containerName="container-00" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.078747 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.138840 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.139024 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lvp\" (UniqueName: \"kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.240880 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75lvp\" (UniqueName: \"kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.240990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.241069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.275440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75lvp\" (UniqueName: \"kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp\") pod \"crc-debug-j8zrw\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.401963 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:40 crc kubenswrapper[4725]: I1002 12:29:40.606373 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" event={"ID":"8cf58188-6555-4b76-bf53-78def7f7ef12","Type":"ContainerStarted","Data":"3b271120d1c34e1ae3bc1d49b2e963c0047c2c20078eecdcd75849d5881946af"} Oct 02 12:29:41 crc kubenswrapper[4725]: I1002 12:29:41.622520 4725 generic.go:334] "Generic (PLEG): container finished" podID="8cf58188-6555-4b76-bf53-78def7f7ef12" containerID="9dc059abc5c2a27554ef5a9131baa0241fdb6a80ae6483fe953604ea86119c68" exitCode=0 Oct 02 12:29:41 crc kubenswrapper[4725]: I1002 12:29:41.622581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" event={"ID":"8cf58188-6555-4b76-bf53-78def7f7ef12","Type":"ContainerDied","Data":"9dc059abc5c2a27554ef5a9131baa0241fdb6a80ae6483fe953604ea86119c68"} Oct 02 12:29:41 crc kubenswrapper[4725]: I1002 12:29:41.668152 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-j8zrw"] Oct 02 12:29:41 crc kubenswrapper[4725]: I1002 12:29:41.677516 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pchnl/crc-debug-j8zrw"] Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.738125 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.889166 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host\") pod \"8cf58188-6555-4b76-bf53-78def7f7ef12\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.889281 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75lvp\" (UniqueName: \"kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp\") pod \"8cf58188-6555-4b76-bf53-78def7f7ef12\" (UID: \"8cf58188-6555-4b76-bf53-78def7f7ef12\") " Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.889621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host" (OuterVolumeSpecName: "host") pod "8cf58188-6555-4b76-bf53-78def7f7ef12" (UID: "8cf58188-6555-4b76-bf53-78def7f7ef12"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.890064 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8cf58188-6555-4b76-bf53-78def7f7ef12-host\") on node \"crc\" DevicePath \"\"" Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.895856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp" (OuterVolumeSpecName: "kube-api-access-75lvp") pod "8cf58188-6555-4b76-bf53-78def7f7ef12" (UID: "8cf58188-6555-4b76-bf53-78def7f7ef12"). InnerVolumeSpecName "kube-api-access-75lvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:29:42 crc kubenswrapper[4725]: I1002 12:29:42.992344 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75lvp\" (UniqueName: \"kubernetes.io/projected/8cf58188-6555-4b76-bf53-78def7f7ef12-kube-api-access-75lvp\") on node \"crc\" DevicePath \"\"" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.284325 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf58188-6555-4b76-bf53-78def7f7ef12" path="/var/lib/kubelet/pods/8cf58188-6555-4b76-bf53-78def7f7ef12/volumes" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.401899 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.637656 4725 scope.go:117] "RemoveContainer" containerID="9dc059abc5c2a27554ef5a9131baa0241fdb6a80ae6483fe953604ea86119c68" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.637777 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/crc-debug-j8zrw" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.763542 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.805992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.822281 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.957130 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:29:43 crc kubenswrapper[4725]: I1002 12:29:43.979419 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.014618 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/extract/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.154298 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-n5pzn_a51da7c1-9136-40c8-851a-f7c2d1f7a644/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.218133 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-n5pzn_a51da7c1-9136-40c8-851a-f7c2d1f7a644/manager/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.236047 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6hxv6_2a1bf314-ad40-4055-8373-b05888c06791/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.384259 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6hxv6_2a1bf314-ad40-4055-8373-b05888c06791/manager/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.409455 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rnbs8_7306cbd5-07f3-48a7-a865-752417bf2e8e/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.429056 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rnbs8_7306cbd5-07f3-48a7-a865-752417bf2e8e/manager/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.579007 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-l56v9_827de292-bc8c-40da-be5f-443d06e48782/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.698525 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-l56v9_827de292-bc8c-40da-be5f-443d06e48782/manager/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.803289 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-cv4rz_e66fa8da-eabe-4fe6-8689-961c09641552/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.804337 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-cv4rz_e66fa8da-eabe-4fe6-8689-961c09641552/manager/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.923423 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-sr7pb_dfe403d1-c0bb-4570-8b27-714c65d930af/kube-rbac-proxy/0.log" Oct 02 12:29:44 crc kubenswrapper[4725]: I1002 12:29:44.987410 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-sr7pb_dfe403d1-c0bb-4570-8b27-714c65d930af/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.063405 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-nzp5n_2d4f9b95-e805-4def-bd1c-35b262ebd01f/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.205643 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-hfz8s_57843ab0-f141-436e-847c-71f339bb736b/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.255612 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-nzp5n_2d4f9b95-e805-4def-bd1c-35b262ebd01f/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.278483 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-hfz8s_57843ab0-f141-436e-847c-71f339bb736b/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.401700 4725 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-bbfd9_9d557980-a1fc-4123-9a45-351264ad1fbc/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.488254 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-bbfd9_9d557980-a1fc-4123-9a45-351264ad1fbc/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.603745 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-zcsfx_023a7a0e-9279-4b9b-ba5d-6cd41b2aa729/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.618645 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-zcsfx_023a7a0e-9279-4b9b-ba5d-6cd41b2aa729/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.751186 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-d7btl_dd3980d8-2ea7-4dd5-9604-9e09025e4220/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.852850 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-d7btl_dd3980d8-2ea7-4dd5-9604-9e09025e4220/manager/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.886702 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-gx29d_81a57946-838b-45e0-8a00-a7b50950db67/kube-rbac-proxy/0.log" Oct 02 12:29:45 crc kubenswrapper[4725]: I1002 12:29:45.974487 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-gx29d_81a57946-838b-45e0-8a00-a7b50950db67/manager/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.069126 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-nlk7p_f4918ab0-3268-4081-bdf8-05df0b51e62b/kube-rbac-proxy/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.188837 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-nlk7p_f4918ab0-3268-4081-bdf8-05df0b51e62b/manager/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.230161 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-5hpzt_44910e65-f73b-4454-bd9d-8fbbfb18445c/kube-rbac-proxy/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.332577 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-5hpzt_44910e65-f73b-4454-bd9d-8fbbfb18445c/manager/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.377859 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5869cb545-jwwlx_3732c646-2b59-4238-8466-4c9240bc5b9a/kube-rbac-proxy/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.421994 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5869cb545-jwwlx_3732c646-2b59-4238-8466-4c9240bc5b9a/manager/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 
12:29:46.525947 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8479857cf7-b2ttm_caab214a-7c5d-4d45-bebe-680090c291d8/kube-rbac-proxy/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.730121 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-859f658b7-xk7wb_34d68cf3-a46e-4588-abff-0487fe2ceacc/kube-rbac-proxy/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.879362 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xxgld_b1d09d2a-fb84-40db-91ed-72875d001d9a/registry-server/0.log" Oct 02 12:29:46 crc kubenswrapper[4725]: I1002 12:29:46.897699 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-859f658b7-xk7wb_34d68cf3-a46e-4588-abff-0487fe2ceacc/operator/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.057736 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-6r9zk_165193eb-72d2-44c8-ad3c-12679db734a1/kube-rbac-proxy/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.142511 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-6r9zk_165193eb-72d2-44c8-ad3c-12679db734a1/manager/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.281184 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-pvtws_fb419c8a-047c-4df7-8120-25624030a3fe/kube-rbac-proxy/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.354359 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-pvtws_fb419c8a-047c-4df7-8120-25624030a3fe/manager/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.487246 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-jhggf_53be820c-d953-4996-96da-4cec8d6b3bf0/operator/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.596674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-25kj8_c4d00c80-69fb-4507-9e14-2a54cdb0b8c5/kube-rbac-proxy/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.666201 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8479857cf7-b2ttm_caab214a-7c5d-4d45-bebe-680090c291d8/manager/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.692267 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-25kj8_c4d00c80-69fb-4507-9e14-2a54cdb0b8c5/manager/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.796418 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-fjs4g_6c738c27-b7d2-4e56-b0e5-61c19a279278/kube-rbac-proxy/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.832460 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-fjs4g_6c738c27-b7d2-4e56-b0e5-61c19a279278/manager/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: 
I1002 12:29:47.911244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-2mchd_d3b254cf-3771-426e-9211-9cd279379d73/kube-rbac-proxy/0.log" Oct 02 12:29:47 crc kubenswrapper[4725]: I1002 12:29:47.953434 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-2mchd_d3b254cf-3771-426e-9211-9cd279379d73/manager/0.log" Oct 02 12:29:48 crc kubenswrapper[4725]: I1002 12:29:48.063953 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-fp4qx_67d56c77-e0a6-4841-9c57-2afc39fcf9db/kube-rbac-proxy/0.log" Oct 02 12:29:48 crc kubenswrapper[4725]: I1002 12:29:48.082498 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-fp4qx_67d56c77-e0a6-4841-9c57-2afc39fcf9db/manager/0.log" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.184042 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7"] Oct 02 12:30:00 crc kubenswrapper[4725]: E1002 12:30:00.185423 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf58188-6555-4b76-bf53-78def7f7ef12" containerName="container-00" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.185447 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf58188-6555-4b76-bf53-78def7f7ef12" containerName="container-00" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.185797 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf58188-6555-4b76-bf53-78def7f7ef12" containerName="container-00" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.186681 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.188888 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.189422 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.200079 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7"] Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.312038 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.312119 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4p27\" (UniqueName: \"kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.312316 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.414038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.414543 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.414594 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4p27\" (UniqueName: \"kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.414941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume\") pod 
\"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.421392 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.431960 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4p27\" (UniqueName: \"kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27\") pod \"collect-profiles-29323470-5d5f7\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.511868 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:00 crc kubenswrapper[4725]: I1002 12:30:00.983757 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7"] Oct 02 12:30:01 crc kubenswrapper[4725]: I1002 12:30:01.823435 4725 generic.go:334] "Generic (PLEG): container finished" podID="5a18dcab-d17e-48df-811c-bc182f0a6b2c" containerID="1d30f5556bc85b1b5852909962b19591a0509beeb6d684488601106652ffc51f" exitCode=0 Oct 02 12:30:01 crc kubenswrapper[4725]: I1002 12:30:01.823677 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" event={"ID":"5a18dcab-d17e-48df-811c-bc182f0a6b2c","Type":"ContainerDied","Data":"1d30f5556bc85b1b5852909962b19591a0509beeb6d684488601106652ffc51f"} Oct 02 12:30:01 crc kubenswrapper[4725]: I1002 12:30:01.823802 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" event={"ID":"5a18dcab-d17e-48df-811c-bc182f0a6b2c","Type":"ContainerStarted","Data":"07e2459c485b7d41cd085ec8385b39193d8a80928f9eef1abc39adbdcb98eaf8"} Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.169594 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.283049 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume\") pod \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.283239 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume\") pod \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.283328 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4p27\" (UniqueName: \"kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27\") pod \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\" (UID: \"5a18dcab-d17e-48df-811c-bc182f0a6b2c\") " Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.283855 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a18dcab-d17e-48df-811c-bc182f0a6b2c" (UID: "5a18dcab-d17e-48df-811c-bc182f0a6b2c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.288937 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27" (OuterVolumeSpecName: "kube-api-access-k4p27") pod "5a18dcab-d17e-48df-811c-bc182f0a6b2c" (UID: "5a18dcab-d17e-48df-811c-bc182f0a6b2c"). InnerVolumeSpecName "kube-api-access-k4p27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.309884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a18dcab-d17e-48df-811c-bc182f0a6b2c" (UID: "5a18dcab-d17e-48df-811c-bc182f0a6b2c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.385844 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4p27\" (UniqueName: \"kubernetes.io/projected/5a18dcab-d17e-48df-811c-bc182f0a6b2c-kube-api-access-k4p27\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.385870 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a18dcab-d17e-48df-811c-bc182f0a6b2c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.385880 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18dcab-d17e-48df-811c-bc182f0a6b2c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.567388 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2hrs6_825215a6-1ebc-426c-b54d-f54f6c261f55/control-plane-machine-set-operator/0.log" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.684987 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ps6mz_c36f3900-450a-437f-9fda-b3c7ccf6b4be/kube-rbac-proxy/0.log" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.741950 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ps6mz_c36f3900-450a-437f-9fda-b3c7ccf6b4be/machine-api-operator/0.log" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.844040 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" event={"ID":"5a18dcab-d17e-48df-811c-bc182f0a6b2c","Type":"ContainerDied","Data":"07e2459c485b7d41cd085ec8385b39193d8a80928f9eef1abc39adbdcb98eaf8"} Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.844080 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323470-5d5f7" Oct 02 12:30:03 crc kubenswrapper[4725]: I1002 12:30:03.844093 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e2459c485b7d41cd085ec8385b39193d8a80928f9eef1abc39adbdcb98eaf8" Oct 02 12:30:04 crc kubenswrapper[4725]: I1002 12:30:04.241245 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp"] Oct 02 12:30:04 crc kubenswrapper[4725]: I1002 12:30:04.249478 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323425-4dnjp"] Oct 02 12:30:05 crc kubenswrapper[4725]: I1002 12:30:05.302097 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710c6864-7bd0-41ca-b599-3234b6cea3b4" path="/var/lib/kubelet/pods/710c6864-7bd0-41ca-b599-3234b6cea3b4/volumes" Oct 02 12:30:15 crc kubenswrapper[4725]: I1002 12:30:15.693352 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-pjgh6_df4f98fb-c0cd-4e39-b9fd-ec6500f6644e/cert-manager-controller/0.log" Oct 02 12:30:15 crc kubenswrapper[4725]: I1002 12:30:15.872766 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-smqhv_c29a13f6-36a6-49c4-b16d-df2fdcda469b/cert-manager-cainjector/0.log" Oct 02 12:30:15 crc kubenswrapper[4725]: I1002 12:30:15.968037 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-wmnqx_c3b045dd-0aa9-4c6b-8930-16c6ae444847/cert-manager-webhook/0.log" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.289983 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:25 crc kubenswrapper[4725]: E1002 12:30:25.290851 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a18dcab-d17e-48df-811c-bc182f0a6b2c" containerName="collect-profiles" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.290865 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a18dcab-d17e-48df-811c-bc182f0a6b2c" containerName="collect-profiles" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.291075 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a18dcab-d17e-48df-811c-bc182f0a6b2c" containerName="collect-profiles" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.293614 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.321497 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.409216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.410003 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7pg\" (UniqueName: \"kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.410025 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.486142 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.500370 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.513500 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.513592 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v7pg\" (UniqueName: \"kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.513616 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.514177 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.514154 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.547049 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.547948 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v7pg\" (UniqueName: \"kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg\") pod \"redhat-operators-rh6nl\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.615803 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.615977 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.616005 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prltr\" (UniqueName: \"kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.619345 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.719939 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.720306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prltr\" (UniqueName: \"kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.720370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.720602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.720929 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.749445 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prltr\" (UniqueName: \"kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr\") pod \"community-operators-l64hd\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:25 crc kubenswrapper[4725]: I1002 12:30:25.842158 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:26 crc kubenswrapper[4725]: I1002 12:30:26.121945 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:26 crc kubenswrapper[4725]: I1002 12:30:26.339998 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:26 crc kubenswrapper[4725]: W1002 12:30:26.341525 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd136be_02c6_4e9f_aa35_b9d419767949.slice/crio-f4710aa7c34ec2f144631a59e00d5ea0e0d59350abdd79df4b90d9ceb814a91c WatchSource:0}: Error finding container f4710aa7c34ec2f144631a59e00d5ea0e0d59350abdd79df4b90d9ceb814a91c: Status 404 returned error can't find the container with id f4710aa7c34ec2f144631a59e00d5ea0e0d59350abdd79df4b90d9ceb814a91c Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.055154 4725 generic.go:334] "Generic (PLEG): container finished" podID="b811236e-88bb-4cb7-8100-110630ae77d9" containerID="5da4f8749590863265217ec12f81d9cfe21f620e449f2a447fd401e5a24cde03" exitCode=0 Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.055220 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerDied","Data":"5da4f8749590863265217ec12f81d9cfe21f620e449f2a447fd401e5a24cde03"} Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.055525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerStarted","Data":"ed596783235f197765fae75e75f3cafe5652a36dbca992694e0bfa2eecb09392"} Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.057982 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.057998 4725 generic.go:334] "Generic (PLEG): container finished" podID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerID="eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123" exitCode=0 Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.058036 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerDied","Data":"eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123"} Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.058063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerStarted","Data":"f4710aa7c34ec2f144631a59e00d5ea0e0d59350abdd79df4b90d9ceb814a91c"} Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.694323 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqkzj"] Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.696425 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.725680 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqkzj"] Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.866881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-catalog-content\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.866972 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5cm9\" (UniqueName: \"kubernetes.io/projected/1b967738-3f35-4166-98e2-bc3face118fd-kube-api-access-w5cm9\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.867051 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-utilities\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.968572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-catalog-content\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.968653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5cm9\" (UniqueName: \"kubernetes.io/projected/1b967738-3f35-4166-98e2-bc3face118fd-kube-api-access-w5cm9\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.968700 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-utilities\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.969214 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-catalog-content\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:27 crc kubenswrapper[4725]: I1002 12:30:27.969245 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b967738-3f35-4166-98e2-bc3face118fd-utilities\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:28 crc kubenswrapper[4725]: I1002 12:30:28.000839 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5cm9\" (UniqueName: \"kubernetes.io/projected/1b967738-3f35-4166-98e2-bc3face118fd-kube-api-access-w5cm9\") pod \"certified-operators-sqkzj\" (UID: \"1b967738-3f35-4166-98e2-bc3face118fd\") " pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:28 crc kubenswrapper[4725]: I1002 12:30:28.018749 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:28 crc kubenswrapper[4725]: W1002 12:30:28.581884 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b967738_3f35_4166_98e2_bc3face118fd.slice/crio-6bcce50e59c3053ba61277d86e6c28284dbd125cc49000e2b1fc8cf610e93a2e WatchSource:0}: Error finding container 6bcce50e59c3053ba61277d86e6c28284dbd125cc49000e2b1fc8cf610e93a2e: Status 404 returned error can't find the container with id 6bcce50e59c3053ba61277d86e6c28284dbd125cc49000e2b1fc8cf610e93a2e Oct 02 12:30:28 crc kubenswrapper[4725]: I1002 12:30:28.582414 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqkzj"] Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.083266 4725 generic.go:334] "Generic (PLEG): container finished" podID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerID="2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01" exitCode=0 Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.083340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerDied","Data":"2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01"} Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.085372 4725 generic.go:334] "Generic (PLEG): container finished" podID="1b967738-3f35-4166-98e2-bc3face118fd" containerID="2cb45196786ade1a5b4e0a27abd2d5d39c28352082e8f036a464ad8485573f8e" exitCode=0 Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.085443 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqkzj" event={"ID":"1b967738-3f35-4166-98e2-bc3face118fd","Type":"ContainerDied","Data":"2cb45196786ade1a5b4e0a27abd2d5d39c28352082e8f036a464ad8485573f8e"} Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.085470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqkzj" event={"ID":"1b967738-3f35-4166-98e2-bc3face118fd","Type":"ContainerStarted","Data":"6bcce50e59c3053ba61277d86e6c28284dbd125cc49000e2b1fc8cf610e93a2e"} Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.093549 4725 generic.go:334] "Generic (PLEG): container finished" podID="b811236e-88bb-4cb7-8100-110630ae77d9" containerID="a52550984292e1e09e43056d7481f8c32ffeb8ae428e0b7bc67dd3c5e3465791" exitCode=0 Oct 02 12:30:29 crc kubenswrapper[4725]: I1002 12:30:29.093909 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerDied","Data":"a52550984292e1e09e43056d7481f8c32ffeb8ae428e0b7bc67dd3c5e3465791"} Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.105058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" 
event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerStarted","Data":"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d"} Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.108070 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerStarted","Data":"089413101e342c6e444336e5993c6481b158f74eae26ff6e50202ab6e5dab435"} Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.129818 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l64hd" podStartSLOduration=2.67807966 podStartE2EDuration="5.129794781s" podCreationTimestamp="2025-10-02 12:30:25 +0000 UTC" firstStartedPulling="2025-10-02 12:30:27.06071101 +0000 UTC m=+3746.968210473" lastFinishedPulling="2025-10-02 12:30:29.512426131 +0000 UTC m=+3749.419925594" observedRunningTime="2025-10-02 12:30:30.127045077 +0000 UTC m=+3750.034544560" watchObservedRunningTime="2025-10-02 12:30:30.129794781 +0000 UTC m=+3750.037294244" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.302281 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-bm7x4_76aebb91-1906-47b5-8efc-f4e8290b9ffb/nmstate-console-plugin/0.log" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.644435 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-xgxbz_36df4296-bcd2-4c2a-b6f7-eef03e21d934/kube-rbac-proxy/0.log" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.646897 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mg76p_7d2d0930-f603-4f33-9da1-f7d372d70912/nmstate-handler/0.log" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.763971 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-xgxbz_36df4296-bcd2-4c2a-b6f7-eef03e21d934/nmstate-metrics/0.log" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.884211 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-mdqp5_b4436bab-3a23-4c24-bb5b-fdd06e5c2b78/nmstate-operator/0.log" Oct 02 12:30:30 crc kubenswrapper[4725]: I1002 12:30:30.967852 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-c8s4z_7ff10093-4a58-4838-88a7-cb77f8ae577a/nmstate-webhook/0.log" Oct 02 12:30:31 crc kubenswrapper[4725]: I1002 12:30:31.142985 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rh6nl" podStartSLOduration=3.466593599 podStartE2EDuration="6.142964337s" podCreationTimestamp="2025-10-02 12:30:25 +0000 UTC" firstStartedPulling="2025-10-02 12:30:27.057659628 +0000 UTC m=+3746.965159091" lastFinishedPulling="2025-10-02 12:30:29.734030366 +0000 UTC m=+3749.641529829" observedRunningTime="2025-10-02 12:30:31.134589892 +0000 UTC m=+3751.042089365" watchObservedRunningTime="2025-10-02 12:30:31.142964337 +0000 UTC m=+3751.050463800" Oct 02 12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.619987 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.620363 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 
12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.688318 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.843110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.843496 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:35 crc kubenswrapper[4725]: I1002 12:30:35.902984 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:36 crc kubenswrapper[4725]: I1002 12:30:36.222652 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:36 crc kubenswrapper[4725]: I1002 12:30:36.235842 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:37 crc kubenswrapper[4725]: I1002 12:30:37.446688 4725 scope.go:117] "RemoveContainer" containerID="349354862b945d2787c003d2aab7d098babc173adeac549e737c3d109b18a099" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.077458 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.186965 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l64hd" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="registry-server" containerID="cri-o://6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d" gracePeriod=2 Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.657923 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.673127 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.673405 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rh6nl" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="registry-server" containerID="cri-o://089413101e342c6e444336e5993c6481b158f74eae26ff6e50202ab6e5dab435" gracePeriod=2 Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.840817 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content\") pod \"2dd136be-02c6-4e9f-aa35-b9d419767949\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.840879 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities\") pod \"2dd136be-02c6-4e9f-aa35-b9d419767949\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.840978 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prltr\" (UniqueName: \"kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr\") pod \"2dd136be-02c6-4e9f-aa35-b9d419767949\" (UID: \"2dd136be-02c6-4e9f-aa35-b9d419767949\") " Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.841649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities" (OuterVolumeSpecName: "utilities") pod "2dd136be-02c6-4e9f-aa35-b9d419767949" (UID: "2dd136be-02c6-4e9f-aa35-b9d419767949"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.842184 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.851899 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr" (OuterVolumeSpecName: "kube-api-access-prltr") pod "2dd136be-02c6-4e9f-aa35-b9d419767949" (UID: "2dd136be-02c6-4e9f-aa35-b9d419767949"). InnerVolumeSpecName "kube-api-access-prltr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.902497 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dd136be-02c6-4e9f-aa35-b9d419767949" (UID: "2dd136be-02c6-4e9f-aa35-b9d419767949"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.944527 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prltr\" (UniqueName: \"kubernetes.io/projected/2dd136be-02c6-4e9f-aa35-b9d419767949-kube-api-access-prltr\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:38 crc kubenswrapper[4725]: I1002 12:30:38.944565 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd136be-02c6-4e9f-aa35-b9d419767949-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.197407 4725 generic.go:334] "Generic (PLEG): container finished" podID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerID="6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d" exitCode=0 Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.197475 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerDied","Data":"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d"} Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.197796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l64hd" event={"ID":"2dd136be-02c6-4e9f-aa35-b9d419767949","Type":"ContainerDied","Data":"f4710aa7c34ec2f144631a59e00d5ea0e0d59350abdd79df4b90d9ceb814a91c"} Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.197486 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l64hd" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.197833 4725 scope.go:117] "RemoveContainer" containerID="6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.201700 4725 generic.go:334] "Generic (PLEG): container finished" podID="1b967738-3f35-4166-98e2-bc3face118fd" containerID="0c2c37f24cf63de6977e742a03aea9af7a0e1a375e4b43a2880826b00e7a4fd5" exitCode=0 Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.201783 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqkzj" event={"ID":"1b967738-3f35-4166-98e2-bc3face118fd","Type":"ContainerDied","Data":"0c2c37f24cf63de6977e742a03aea9af7a0e1a375e4b43a2880826b00e7a4fd5"} Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.217090 4725 generic.go:334] "Generic (PLEG): container finished" podID="b811236e-88bb-4cb7-8100-110630ae77d9" containerID="089413101e342c6e444336e5993c6481b158f74eae26ff6e50202ab6e5dab435" exitCode=0 Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.217126 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerDied","Data":"089413101e342c6e444336e5993c6481b158f74eae26ff6e50202ab6e5dab435"} Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.217152 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh6nl" event={"ID":"b811236e-88bb-4cb7-8100-110630ae77d9","Type":"ContainerDied","Data":"ed596783235f197765fae75e75f3cafe5652a36dbca992694e0bfa2eecb09392"} Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.217165 4725 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ed596783235f197765fae75e75f3cafe5652a36dbca992694e0bfa2eecb09392" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.222025 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.231332 4725 scope.go:117] "RemoveContainer" containerID="2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.253861 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.260244 4725 scope.go:117] "RemoveContainer" containerID="eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.292861 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l64hd"] Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.314532 4725 scope.go:117] "RemoveContainer" containerID="6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d" Oct 02 12:30:39 crc kubenswrapper[4725]: E1002 12:30:39.315236 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d\": container with ID starting with 6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d not found: ID does not exist" containerID="6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.315296 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d"} err="failed to get container status \"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d\": rpc error: code = NotFound desc = could not find container \"6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d\": container with ID starting with 6841de93c9d3cbe67f9b1b77401f9a3a0fce29a0936901039362123ef3f0935d not found: ID does not exist" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.315333 4725 scope.go:117] "RemoveContainer" containerID="2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01" Oct 02 12:30:39 crc kubenswrapper[4725]: E1002 12:30:39.315834 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01\": container with ID starting with 2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01 not found: ID does not exist" containerID="2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.315862 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01"} err="failed to get container status \"2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01\": rpc error: code = NotFound desc = could not find container \"2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01\": container with ID starting with 2e18416050a967553819d4e4e4a2a101120602f1c02d40e5cb6105d1f0f0dc01 not found: ID does not exist" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.315880 4725 scope.go:117] "RemoveContainer" 
containerID="eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123" Oct 02 12:30:39 crc kubenswrapper[4725]: E1002 12:30:39.316243 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123\": container with ID starting with eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123 not found: ID does not exist" containerID="eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.316265 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123"} err="failed to get container status \"eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123\": rpc error: code = NotFound desc = could not find container \"eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123\": container with ID starting with eb7011bdb31fc39a44c6bd0bcfde20c5036aa1b6b35e1c748baf88daa1611123 not found: ID does not exist" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.350967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v7pg\" (UniqueName: \"kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg\") pod \"b811236e-88bb-4cb7-8100-110630ae77d9\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.351046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities\") pod \"b811236e-88bb-4cb7-8100-110630ae77d9\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.351245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content\") pod \"b811236e-88bb-4cb7-8100-110630ae77d9\" (UID: \"b811236e-88bb-4cb7-8100-110630ae77d9\") " Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.353761 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities" (OuterVolumeSpecName: "utilities") pod "b811236e-88bb-4cb7-8100-110630ae77d9" (UID: "b811236e-88bb-4cb7-8100-110630ae77d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.357910 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg" (OuterVolumeSpecName: "kube-api-access-8v7pg") pod "b811236e-88bb-4cb7-8100-110630ae77d9" (UID: "b811236e-88bb-4cb7-8100-110630ae77d9"). InnerVolumeSpecName "kube-api-access-8v7pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.424379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b811236e-88bb-4cb7-8100-110630ae77d9" (UID: "b811236e-88bb-4cb7-8100-110630ae77d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.454041 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v7pg\" (UniqueName: \"kubernetes.io/projected/b811236e-88bb-4cb7-8100-110630ae77d9-kube-api-access-8v7pg\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.454080 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:39 crc kubenswrapper[4725]: I1002 12:30:39.454095 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811236e-88bb-4cb7-8100-110630ae77d9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:40 crc kubenswrapper[4725]: I1002 12:30:40.226957 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rh6nl" Oct 02 12:30:40 crc kubenswrapper[4725]: I1002 12:30:40.262988 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:40 crc kubenswrapper[4725]: I1002 12:30:40.281976 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rh6nl"] Oct 02 12:30:41 crc kubenswrapper[4725]: I1002 12:30:41.263309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqkzj" event={"ID":"1b967738-3f35-4166-98e2-bc3face118fd","Type":"ContainerStarted","Data":"5a94037b2e8f2f70c629d449d09b5eb0e112d189f0540dba6f3975cb59f575bc"} Oct 02 12:30:41 crc kubenswrapper[4725]: I1002 12:30:41.286200 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqkzj" podStartSLOduration=3.531335 podStartE2EDuration="14.286184784s" podCreationTimestamp="2025-10-02 12:30:27 +0000 UTC" firstStartedPulling="2025-10-02 12:30:29.086590959 +0000 UTC m=+3748.994090432" lastFinishedPulling="2025-10-02 12:30:39.841440753 +0000 UTC m=+3759.748940216" observedRunningTime="2025-10-02 12:30:41.285120836 +0000 UTC m=+3761.192620319" watchObservedRunningTime="2025-10-02 12:30:41.286184784 +0000 UTC m=+3761.193684247" Oct 02 12:30:41 crc kubenswrapper[4725]: I1002 12:30:41.297205 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" path="/var/lib/kubelet/pods/2dd136be-02c6-4e9f-aa35-b9d419767949/volumes" Oct 02 12:30:41 crc kubenswrapper[4725]: I1002 12:30:41.298210 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" path="/var/lib/kubelet/pods/b811236e-88bb-4cb7-8100-110630ae77d9/volumes" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.019128 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.019751 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.083698 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.379979 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-sqkzj" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.384751 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-lsjhw_b130a44e-650b-4940-a3fa-392c5f797d6f/kube-rbac-proxy/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.475100 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-lsjhw_b130a44e-650b-4940-a3fa-392c5f797d6f/controller/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.479114 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqkzj"] Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.521649 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.521908 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6bwvh" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="registry-server" containerID="cri-o://cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f" gracePeriod=2 Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.722067 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.880148 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.894267 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.975330 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log" Oct 02 12:30:48 crc kubenswrapper[4725]: I1002 12:30:48.987006 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.058166 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content\") pod \"2c0bff4f-9811-437e-9938-f5439d5a38b4\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.058474 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfmp9\" (UniqueName: \"kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9\") pod \"2c0bff4f-9811-437e-9938-f5439d5a38b4\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.058888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities\") pod \"2c0bff4f-9811-437e-9938-f5439d5a38b4\" (UID: \"2c0bff4f-9811-437e-9938-f5439d5a38b4\") " Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.063177 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities" (OuterVolumeSpecName: "utilities") pod "2c0bff4f-9811-437e-9938-f5439d5a38b4" (UID: "2c0bff4f-9811-437e-9938-f5439d5a38b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.089510 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9" (OuterVolumeSpecName: "kube-api-access-mfmp9") pod "2c0bff4f-9811-437e-9938-f5439d5a38b4" (UID: "2c0bff4f-9811-437e-9938-f5439d5a38b4"). InnerVolumeSpecName "kube-api-access-mfmp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.110204 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.162116 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfmp9\" (UniqueName: \"kubernetes.io/projected/2c0bff4f-9811-437e-9938-f5439d5a38b4-kube-api-access-mfmp9\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.162151 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.169132 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c0bff4f-9811-437e-9938-f5439d5a38b4" (UID: "2c0bff4f-9811-437e-9938-f5439d5a38b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.265649 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0bff4f-9811-437e-9938-f5439d5a38b4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.301914 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.333863 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerID="cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f" exitCode=0 Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.334697 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bwvh" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.335125 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerDied","Data":"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f"} Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.335149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bwvh" event={"ID":"2c0bff4f-9811-437e-9938-f5439d5a38b4","Type":"ContainerDied","Data":"95ba0e66bb7d25b48523caa338c3127983fe45d982d4e74029e84a5c6f1d1f31"} Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.335163 4725 scope.go:117] "RemoveContainer" containerID="cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.361407 4725 scope.go:117] "RemoveContainer" containerID="29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.361689 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.370552 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6bwvh"] Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.386921 4725 scope.go:117] "RemoveContainer" containerID="8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.430808 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.434604 4725 scope.go:117] "RemoveContainer" containerID="cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f" Oct 02 12:30:49 crc kubenswrapper[4725]: E1002 12:30:49.437067 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f\": container with ID starting with cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f not found: ID does not exist" containerID="cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.437119 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f"} err="failed to get container status \"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f\": rpc error: code = NotFound desc = could not find container \"cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f\": container with ID starting with cb734bf247647f1eace06c9350a0ff5921abba73b4815cd9deb0416de0019a3f not found: ID does not exist" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.437146 4725 scope.go:117] "RemoveContainer" containerID="29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9" Oct 02 12:30:49 crc kubenswrapper[4725]: E1002 12:30:49.437624 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9\": container with ID starting with 29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9 not found: ID does not exist" containerID="29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.437678 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9"} err="failed to get container status \"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9\": rpc error: code = NotFound desc = could not find container \"29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9\": container with ID starting with 29c280f78e237dfdbaa39df51f03da3bb6546f319e5e60ad2f0b4cdd36ab6ff9 not found: ID does not exist" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.437706 4725 scope.go:117] "RemoveContainer" containerID="8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d" Oct 02 12:30:49 crc kubenswrapper[4725]: E1002 12:30:49.438040 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d\": container with ID starting with 8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d not found: ID does not exist" containerID="8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.438079 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d"} err="failed to get container status \"8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d\": rpc error: code = NotFound desc = could not find container \"8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d\": container with ID starting with 8b9c1e1d8c80bd6f93f95f301c3a97aae5a6305da41b388a5385edaf77cb0d1d not found: ID does not exist" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.483092 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.581304 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.684548 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.723305 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.739915 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log" Oct 02 12:30:49 crc kubenswrapper[4725]: I1002 12:30:49.977336 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/kube-rbac-proxy/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.050088 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/controller/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.139827 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/frr-metrics/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.288173 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/kube-rbac-proxy-frr/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.319197 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/reloader/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.539489 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-m8jtf_88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1/frr-k8s-webhook-server/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.645986 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8cc8c8574-jxrnl_ceb5035d-4044-42f3-be35-b3f861ba059c/manager/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.788023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-757dfd7686-7m9m2_6cada9d9-3b84-4015-b1c9-7bf3de0debbb/webhook-server/0.log" Oct 02 12:30:50 crc kubenswrapper[4725]: I1002 12:30:50.943475 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2ln9_58368f71-69e1-4da4-9a08-0c7c5b093c4d/kube-rbac-proxy/0.log" Oct 02 12:30:51 crc kubenswrapper[4725]: I1002 12:30:51.278503 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" path="/var/lib/kubelet/pods/2c0bff4f-9811-437e-9938-f5439d5a38b4/volumes" Oct 02 12:30:51 crc kubenswrapper[4725]: I1002 12:30:51.460059 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/frr/0.log" Oct 02 12:30:51 crc kubenswrapper[4725]: I1002 12:30:51.528391 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2ln9_58368f71-69e1-4da4-9a08-0c7c5b093c4d/speaker/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.129069 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.322331 4725 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.324281 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.334259 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.506436 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/extract/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.508494 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.555891 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.673637 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.881298 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.893437 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log" Oct 02 12:31:05 crc kubenswrapper[4725]: I1002 12:31:05.901053 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.048399 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.069186 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.210269 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/registry-server/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.311822 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.461959 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.466748 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.517631 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.678436 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.714806 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log" Oct 02 12:31:06 crc kubenswrapper[4725]: I1002 12:31:06.899323 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.211989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.212811 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.252471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.378010 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/registry-server/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.454630 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.476179 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.486027 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/extract/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.677903 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.678381 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5zrbg_6b1730b0-4eb3-4a40-86a6-2908a9c9acb2/marketplace-operator/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.884463 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.894120 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log" Oct 02 12:31:07 crc kubenswrapper[4725]: I1002 12:31:07.899223 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.075067 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.114010 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.218661 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/registry-server/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.289829 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.502609 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.515764 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.517942 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.688360 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log" Oct 02 12:31:08 crc kubenswrapper[4725]: I1002 12:31:08.713575 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log" Oct 02 12:31:09 crc kubenswrapper[4725]: I1002 12:31:09.428021 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/registry-server/0.log" Oct 02 12:31:14 crc kubenswrapper[4725]: I1002 12:31:14.977947 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:31:14 
crc kubenswrapper[4725]: I1002 12:31:14.978554 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:31:44 crc kubenswrapper[4725]: E1002 12:31:44.040385 4725 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.162:40610->38.129.56.162:34805: read tcp 38.129.56.162:40610->38.129.56.162:34805: read: connection reset by peer Oct 02 12:31:44 crc kubenswrapper[4725]: I1002 12:31:44.977793 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:31:44 crc kubenswrapper[4725]: I1002 12:31:44.978189 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:32:14 crc kubenswrapper[4725]: I1002 12:32:14.979089 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 12:32:14 crc kubenswrapper[4725]: I1002 12:32:14.979637 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 12:32:14 crc kubenswrapper[4725]: I1002 12:32:14.979739 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" Oct 02 12:32:14 crc kubenswrapper[4725]: I1002 12:32:14.980422 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"} pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 12:32:14 crc kubenswrapper[4725]: I1002 12:32:14.980495 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" containerID="cri-o://f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" gracePeriod=600 Oct 02 12:32:15 crc kubenswrapper[4725]: E1002 12:32:15.112482 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:32:15 crc kubenswrapper[4725]: I1002 12:32:15.179382 4725 generic.go:334] "Generic (PLEG): container finished" podID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" exitCode=0 Oct 02 12:32:15 crc kubenswrapper[4725]: I1002 12:32:15.179425 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerDied","Data":"f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"} Oct 02 12:32:15 crc kubenswrapper[4725]: I1002 12:32:15.179475 4725 scope.go:117] "RemoveContainer" containerID="f7582e4d246b2d2d4e30e2b034251d71e5948d501b7154a1dda3b396410120a1" Oct 02 12:32:15 crc kubenswrapper[4725]: I1002 12:32:15.188119 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:32:15 crc kubenswrapper[4725]: E1002 12:32:15.205337 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:32:29 crc kubenswrapper[4725]: I1002 12:32:29.269057 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:32:29 crc kubenswrapper[4725]: E1002 12:32:29.270254 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:32:40 crc kubenswrapper[4725]: I1002 12:32:40.268405 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:32:40 crc kubenswrapper[4725]: E1002 12:32:40.269539 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:32:51 crc kubenswrapper[4725]: I1002 12:32:51.286152 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:32:51 crc kubenswrapper[4725]: E1002 12:32:51.286921 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" 
podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.246348 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247451 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247469 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247480 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247488 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247503 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247511 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247523 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247531 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247541 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247548 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247564 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247572 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247598 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247618 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247633 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247643 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="extract-content" Oct 02 12:32:56 crc kubenswrapper[4725]: E1002 12:32:56.247662 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247672 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="extract-utilities" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247917 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b811236e-88bb-4cb7-8100-110630ae77d9" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247942 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0bff4f-9811-437e-9938-f5439d5a38b4" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.247954 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd136be-02c6-4e9f-aa35-b9d419767949" containerName="registry-server" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.249718 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.261568 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.360268 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.360615 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzz8\" (UniqueName: \"kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.360946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.462665 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.462858 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.462895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xzz8\" (UniqueName: \"kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8\") pod 
\"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.463134 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.463367 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.485666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xzz8\" (UniqueName: \"kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8\") pod \"redhat-marketplace-vcxzv\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:56 crc kubenswrapper[4725]: I1002 12:32:56.573992 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:32:57 crc kubenswrapper[4725]: I1002 12:32:57.070855 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:32:57 crc kubenswrapper[4725]: I1002 12:32:57.723905 4725 generic.go:334] "Generic (PLEG): container finished" podID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerID="eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26" exitCode=0 Oct 02 12:32:57 crc kubenswrapper[4725]: I1002 12:32:57.723963 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerDied","Data":"eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26"} Oct 02 12:32:57 crc kubenswrapper[4725]: I1002 12:32:57.724160 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerStarted","Data":"b40f097dd85bc04e8515345b5ba52d6ef66690ffd30e8a41425478cbfbbdefa8"} Oct 02 12:32:58 crc kubenswrapper[4725]: I1002 12:32:58.735255 4725 generic.go:334] "Generic (PLEG): container finished" podID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerID="08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8" exitCode=0 Oct 02 12:32:58 crc kubenswrapper[4725]: I1002 12:32:58.735692 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerDied","Data":"08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8"} Oct 02 12:32:59 crc kubenswrapper[4725]: I1002 12:32:59.751532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerStarted","Data":"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889"} Oct 02 12:32:59 crc kubenswrapper[4725]: I1002 12:32:59.783612 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vcxzv" podStartSLOduration=2.252334077 podStartE2EDuration="3.783587743s" podCreationTimestamp="2025-10-02 12:32:56 +0000 UTC" firstStartedPulling="2025-10-02 12:32:57.725571483 +0000 UTC m=+3897.633070956" lastFinishedPulling="2025-10-02 12:32:59.256825149 +0000 UTC m=+3899.164324622" observedRunningTime="2025-10-02 12:32:59.772976427 +0000 UTC m=+3899.680475910" watchObservedRunningTime="2025-10-02 12:32:59.783587743 +0000 UTC m=+3899.691087216" Oct 02 12:33:03 crc kubenswrapper[4725]: I1002 12:33:03.267558 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:33:03 crc kubenswrapper[4725]: E1002 12:33:03.268563 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:33:06 crc kubenswrapper[4725]: I1002 12:33:06.574457 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:06 crc kubenswrapper[4725]: I1002 12:33:06.574779 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:06 crc kubenswrapper[4725]: I1002 12:33:06.656794 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:06 crc kubenswrapper[4725]: I1002 12:33:06.884359 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:06 crc kubenswrapper[4725]: I1002 12:33:06.942419 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:33:07 crc kubenswrapper[4725]: I1002 12:33:07.845332 4725 generic.go:334] "Generic (PLEG): container finished" podID="71dc8102-d90b-430d-8edd-5661f65956a7" containerID="763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103" exitCode=0 Oct 02 12:33:07 crc kubenswrapper[4725]: I1002 12:33:07.845480 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pchnl/must-gather-vzf4d" event={"ID":"71dc8102-d90b-430d-8edd-5661f65956a7","Type":"ContainerDied","Data":"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103"} Oct 02 12:33:07 crc kubenswrapper[4725]: I1002 12:33:07.850237 4725 scope.go:117] "RemoveContainer" containerID="763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103" Oct 02 12:33:08 crc kubenswrapper[4725]: I1002 12:33:08.704191 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pchnl_must-gather-vzf4d_71dc8102-d90b-430d-8edd-5661f65956a7/gather/0.log" Oct 02 12:33:08 crc kubenswrapper[4725]: I1002 12:33:08.861251 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vcxzv" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="registry-server" containerID="cri-o://d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889" gracePeriod=2 Oct 02 12:33:09 crc 
kubenswrapper[4725]: I1002 12:33:09.396683 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.515576 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content\") pod \"da160180-4f80-49e8-85b4-297a2ef31ff9\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.515738 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xzz8\" (UniqueName: \"kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8\") pod \"da160180-4f80-49e8-85b4-297a2ef31ff9\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.515850 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities\") pod \"da160180-4f80-49e8-85b4-297a2ef31ff9\" (UID: \"da160180-4f80-49e8-85b4-297a2ef31ff9\") " Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.517297 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities" (OuterVolumeSpecName: "utilities") pod "da160180-4f80-49e8-85b4-297a2ef31ff9" (UID: "da160180-4f80-49e8-85b4-297a2ef31ff9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.530664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da160180-4f80-49e8-85b4-297a2ef31ff9" (UID: "da160180-4f80-49e8-85b4-297a2ef31ff9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.532237 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8" (OuterVolumeSpecName: "kube-api-access-7xzz8") pod "da160180-4f80-49e8-85b4-297a2ef31ff9" (UID: "da160180-4f80-49e8-85b4-297a2ef31ff9"). InnerVolumeSpecName "kube-api-access-7xzz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.618095 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.618127 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xzz8\" (UniqueName: \"kubernetes.io/projected/da160180-4f80-49e8-85b4-297a2ef31ff9-kube-api-access-7xzz8\") on node \"crc\" DevicePath \"\"" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.618137 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da160180-4f80-49e8-85b4-297a2ef31ff9-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.879426 4725 generic.go:334] "Generic (PLEG): container finished" podID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerID="d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889" exitCode=0 Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.880050 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerDied","Data":"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889"} Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.880146 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcxzv" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.880176 4725 scope.go:117] "RemoveContainer" containerID="d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.880155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcxzv" event={"ID":"da160180-4f80-49e8-85b4-297a2ef31ff9","Type":"ContainerDied","Data":"b40f097dd85bc04e8515345b5ba52d6ef66690ffd30e8a41425478cbfbbdefa8"} Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.928231 4725 scope.go:117] "RemoveContainer" containerID="08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.967192 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.972684 4725 scope.go:117] "RemoveContainer" containerID="eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26" Oct 02 12:33:09 crc kubenswrapper[4725]: I1002 12:33:09.977103 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcxzv"] Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.021651 4725 scope.go:117] "RemoveContainer" containerID="d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889" Oct 02 12:33:10 crc kubenswrapper[4725]: E1002 12:33:10.022163 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889\": container with ID starting with d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889 not found: ID does not exist" containerID="d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889" Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.022223 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889"} err="failed to get container status \"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889\": rpc error: code = NotFound desc = could not find container \"d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889\": container with ID starting with d85ddc091aa325b1d207a37c8fcb36e7243f1c9eb2d0aa1b7b09af9a495d9889 not found: ID does not exist" Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.022255 4725 scope.go:117] "RemoveContainer" containerID="08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8" Oct 02 12:33:10 crc kubenswrapper[4725]: E1002 12:33:10.022638 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8\": container with ID starting with 08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8 not found: ID does not exist" containerID="08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8" Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.022668 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8"} err="failed to get container status \"08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8\": rpc error: code = NotFound desc = could not find container \"08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8\": container with ID starting with 08b51d7436a4b48de514aa2dd39145d9a40e4d297154737ada138750f27451a8 not found: ID does not exist" Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.022685 4725 scope.go:117] "RemoveContainer" containerID="eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26" Oct 02 12:33:10 crc kubenswrapper[4725]: E1002 12:33:10.023426 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26\": container with ID starting with eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26 not found: ID does not exist" containerID="eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26" Oct 02 12:33:10 crc kubenswrapper[4725]: I1002 12:33:10.023455 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26"} err="failed to get container status \"eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26\": rpc error: code = NotFound desc = could not find container \"eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26\": container with ID starting with eeea9bb7a6f1ac7b190798cee0511ad806461742f81305552624faa152905e26 not found: ID does not exist" Oct 02 12:33:11 crc kubenswrapper[4725]: I1002 12:33:11.283970 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" path="/var/lib/kubelet/pods/da160180-4f80-49e8-85b4-297a2ef31ff9/volumes" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.269094 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:33:17 crc kubenswrapper[4725]: E1002 12:33:17.270200 4725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.291771 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pchnl/must-gather-vzf4d"] Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.292061 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pchnl/must-gather-vzf4d" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="copy" containerID="cri-o://876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4" gracePeriod=2 Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.314795 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pchnl/must-gather-vzf4d"] Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.719453 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pchnl_must-gather-vzf4d_71dc8102-d90b-430d-8edd-5661f65956a7/copy/0.log" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.720135 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.791524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85qcx\" (UniqueName: \"kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx\") pod \"71dc8102-d90b-430d-8edd-5661f65956a7\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.791592 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output\") pod \"71dc8102-d90b-430d-8edd-5661f65956a7\" (UID: \"71dc8102-d90b-430d-8edd-5661f65956a7\") " Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.798181 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx" (OuterVolumeSpecName: "kube-api-access-85qcx") pod "71dc8102-d90b-430d-8edd-5661f65956a7" (UID: "71dc8102-d90b-430d-8edd-5661f65956a7"). InnerVolumeSpecName "kube-api-access-85qcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.893829 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85qcx\" (UniqueName: \"kubernetes.io/projected/71dc8102-d90b-430d-8edd-5661f65956a7-kube-api-access-85qcx\") on node \"crc\" DevicePath \"\"" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.944212 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "71dc8102-d90b-430d-8edd-5661f65956a7" (UID: "71dc8102-d90b-430d-8edd-5661f65956a7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.966556 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pchnl_must-gather-vzf4d_71dc8102-d90b-430d-8edd-5661f65956a7/copy/0.log" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.966980 4725 generic.go:334] "Generic (PLEG): container finished" podID="71dc8102-d90b-430d-8edd-5661f65956a7" containerID="876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4" exitCode=143 Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.967033 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pchnl/must-gather-vzf4d" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.967054 4725 scope.go:117] "RemoveContainer" containerID="876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.989169 4725 scope.go:117] "RemoveContainer" containerID="763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103" Oct 02 12:33:17 crc kubenswrapper[4725]: I1002 12:33:17.995455 4725 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/71dc8102-d90b-430d-8edd-5661f65956a7-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 02 12:33:18 crc kubenswrapper[4725]: I1002 12:33:18.068104 4725 scope.go:117] "RemoveContainer" containerID="876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4" Oct 02 12:33:18 crc kubenswrapper[4725]: E1002 12:33:18.068646 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4\": container with ID starting with 876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4 not found: ID does not exist" containerID="876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4" Oct 02 12:33:18 crc kubenswrapper[4725]: I1002 12:33:18.068682 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4"} err="failed to get container status \"876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4\": rpc error: code = NotFound desc = could not find container \"876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4\": container with ID starting with 876a402cb2fbca4f95dd2a023bfee4c7c56187948437fef0f1b995d30b92fbd4 not found: ID does not exist" Oct 02 12:33:18 crc kubenswrapper[4725]: I1002 12:33:18.068707 4725 scope.go:117] "RemoveContainer" containerID="763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103" Oct 02 12:33:18 crc kubenswrapper[4725]: E1002 12:33:18.068995 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103\": container with ID starting with 763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103 not found: ID does not exist" containerID="763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103" Oct 02 12:33:18 crc kubenswrapper[4725]: I1002 12:33:18.069015 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103"} err="failed to get container status 
\"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103\": rpc error: code = NotFound desc = could not find container \"763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103\": container with ID starting with 763dcf0da9ac010ac9091f57d98667351b1fb35d2217b954dc357818c953c103 not found: ID does not exist" Oct 02 12:33:19 crc kubenswrapper[4725]: I1002 12:33:19.282085 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" path="/var/lib/kubelet/pods/71dc8102-d90b-430d-8edd-5661f65956a7/volumes" Oct 02 12:33:30 crc kubenswrapper[4725]: I1002 12:33:30.268363 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:33:30 crc kubenswrapper[4725]: E1002 12:33:30.269308 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:33:42 crc kubenswrapper[4725]: I1002 12:33:42.269279 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:33:42 crc kubenswrapper[4725]: E1002 12:33:42.270692 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.721232 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6wlv8/must-gather-m8fzk"] Oct 02 12:33:52 crc kubenswrapper[4725]: E1002 12:33:52.722126 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="registry-server" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722140 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="registry-server" Oct 02 12:33:52 crc kubenswrapper[4725]: E1002 12:33:52.722172 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="gather" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722178 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="gather" Oct 02 12:33:52 crc kubenswrapper[4725]: E1002 12:33:52.722198 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="copy" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722204 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="copy" Oct 02 12:33:52 crc kubenswrapper[4725]: E1002 12:33:52.722214 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="extract-utilities" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722221 4725 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="extract-utilities" Oct 02 12:33:52 crc kubenswrapper[4725]: E1002 12:33:52.722232 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="extract-content" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722238 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="extract-content" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722414 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="da160180-4f80-49e8-85b4-297a2ef31ff9" containerName="registry-server" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722433 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="gather" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.722450 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dc8102-d90b-430d-8edd-5661f65956a7" containerName="copy" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.723366 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.729572 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6wlv8"/"default-dockercfg-vhlrq" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.730740 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6wlv8"/"kube-root-ca.crt" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.731023 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6wlv8"/"openshift-service-ca.crt" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.731022 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6wlv8/must-gather-m8fzk"] Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.830637 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvp5\" (UniqueName: \"kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5\") pod \"must-gather-m8fzk\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.830742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output\") pod \"must-gather-m8fzk\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.933445 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zvp5\" (UniqueName: \"kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5\") pod \"must-gather-m8fzk\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.934291 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output\") pod \"must-gather-m8fzk\" (UID: 
\"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.934883 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output\") pod \"must-gather-m8fzk\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:52 crc kubenswrapper[4725]: I1002 12:33:52.971517 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zvp5\" (UniqueName: \"kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5\") pod \"must-gather-m8fzk\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") " pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:53 crc kubenswrapper[4725]: I1002 12:33:53.040845 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" Oct 02 12:33:53 crc kubenswrapper[4725]: I1002 12:33:53.587042 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6wlv8/must-gather-m8fzk"] Oct 02 12:33:54 crc kubenswrapper[4725]: I1002 12:33:54.368443 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" event={"ID":"980815d8-d8e4-489d-a7fc-5cd7d48f5df7","Type":"ContainerStarted","Data":"385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e"} Oct 02 12:33:54 crc kubenswrapper[4725]: I1002 12:33:54.368788 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" event={"ID":"980815d8-d8e4-489d-a7fc-5cd7d48f5df7","Type":"ContainerStarted","Data":"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"} Oct 02 12:33:54 crc kubenswrapper[4725]: I1002 12:33:54.368805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" event={"ID":"980815d8-d8e4-489d-a7fc-5cd7d48f5df7","Type":"ContainerStarted","Data":"93204454dc4c7787d5b1475fdb2c6062fcd263e593ad822c34d27f7f89b68696"} Oct 02 12:33:54 crc kubenswrapper[4725]: I1002 12:33:54.393761 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" podStartSLOduration=2.3937419 podStartE2EDuration="2.3937419s" podCreationTimestamp="2025-10-02 12:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 12:33:54.386064945 +0000 UTC m=+3954.293564428" watchObservedRunningTime="2025-10-02 12:33:54.3937419 +0000 UTC m=+3954.301241363" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.268659 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:33:57 crc kubenswrapper[4725]: E1002 12:33:57.271046 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.342283 4725 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-6wlv8/crc-debug-m5n2x"] Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.344625 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.436965 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.437084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj458\" (UniqueName: \"kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.538971 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.539067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.539092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj458\" (UniqueName: \"kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.561810 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj458\" (UniqueName: \"kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458\") pod \"crc-debug-m5n2x\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:57 crc kubenswrapper[4725]: I1002 12:33:57.680189 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:33:58 crc kubenswrapper[4725]: I1002 12:33:58.404216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" event={"ID":"97e22cad-0329-4d10-83e0-7872bc60a935","Type":"ContainerStarted","Data":"5b6c72f6cddec5705b92f6b3ad91fcb9cbd0878722631e87f5056d6f184ed1f7"} Oct 02 12:33:58 crc kubenswrapper[4725]: I1002 12:33:58.404758 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" event={"ID":"97e22cad-0329-4d10-83e0-7872bc60a935","Type":"ContainerStarted","Data":"07f8284d6d97048649927bcfdd1346f64ce42c86b8390077bdac39fbf5b6ef8d"} Oct 02 12:33:58 crc kubenswrapper[4725]: I1002 12:33:58.425822 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" podStartSLOduration=1.425803437 podStartE2EDuration="1.425803437s" podCreationTimestamp="2025-10-02 12:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 12:33:58.417145205 +0000 UTC m=+3958.324644668" watchObservedRunningTime="2025-10-02 12:33:58.425803437 +0000 UTC m=+3958.333302900" Oct 02 12:34:09 crc kubenswrapper[4725]: I1002 12:34:09.269398 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:34:09 crc kubenswrapper[4725]: E1002 12:34:09.269979 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:34:23 crc kubenswrapper[4725]: I1002 12:34:23.268711 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:34:23 crc kubenswrapper[4725]: E1002 12:34:23.269369 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:34:37 crc kubenswrapper[4725]: I1002 12:34:37.268418 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:34:37 crc kubenswrapper[4725]: E1002 12:34:37.269271 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:34:51 crc kubenswrapper[4725]: I1002 12:34:51.275911 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:34:51 crc kubenswrapper[4725]: E1002 12:34:51.276874 4725 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:35:00 crc kubenswrapper[4725]: I1002 12:35:00.774564 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d64c5b6c4-wjr9t_d2ef4726-c6b9-4bb3-909d-af176b24f2c8/barbican-api-log/0.log" Oct 02 12:35:00 crc kubenswrapper[4725]: I1002 12:35:00.787811 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d64c5b6c4-wjr9t_d2ef4726-c6b9-4bb3-909d-af176b24f2c8/barbican-api/0.log" Oct 02 12:35:00 crc kubenswrapper[4725]: I1002 12:35:00.986096 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f86f598-mgjgh_7a8e9323-4d6b-4015-80bb-5d2752bfd94c/barbican-keystone-listener/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.186788 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f86f598-mgjgh_7a8e9323-4d6b-4015-80bb-5d2752bfd94c/barbican-keystone-listener-log/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.234676 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5798d58dff-jkj6h_2ba9160b-539e-40a1-8d2f-4cb0f25e4084/barbican-worker/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.563044 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5798d58dff-jkj6h_2ba9160b-539e-40a1-8d2f-4cb0f25e4084/barbican-worker-log/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.585426 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vmgq5_58fb4d5d-e01c-4ede-91c8-9674a71c34a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.841807 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/proxy-httpd/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.984182 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/ceilometer-notification-agent/0.log" Oct 02 12:35:01 crc kubenswrapper[4725]: I1002 12:35:01.988814 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/ceilometer-central-agent/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.028304 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9643e3b3-a428-4bd6-aa1d-9dfcb1eb1930/sg-core/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.206907 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ced9aab-c4e7-4463-9d29-d32521d07220/cinder-api-log/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.491648 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ced9aab-c4e7-4463-9d29-d32521d07220/cinder-api/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.507300 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_58f46069-09a8-4501-95a3-70b3d03ee211/cinder-scheduler/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.643608 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_58f46069-09a8-4501-95a3-70b3d03ee211/probe/0.log" Oct 02 12:35:02 crc kubenswrapper[4725]: I1002 12:35:02.761651 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-f5ps2_15f9480b-ec9f-48d1-9778-1376f2c1245e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.177820 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f4j9r_b1d44487-97d6-4e7d-856d-61aec07be83c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.181949 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z874b_2864a400-a21a-4c43-b078-16fece86e8fb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.413890 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/init/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.472146 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/init/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.562977 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-xrt5j_d151b71f-87ba-40a0-8858-99f129ac1e55/dnsmasq-dns/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.646632 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zh2n9_05b5e1c6-efe9-4a6f-a623-c058ae2e301a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.730989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_138ed7a6-24d0-4071-b142-ece9a296eb65/glance-log/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.807938 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_138ed7a6-24d0-4071-b142-ece9a296eb65/glance-httpd/0.log" Oct 02 12:35:03 crc kubenswrapper[4725]: I1002 12:35:03.980618 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_55c8573f-3cb6-4d8c-8b84-dfa5f6221f42/glance-log/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: I1002 12:35:04.114989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_55c8573f-3cb6-4d8c-8b84-dfa5f6221f42/glance-httpd/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: I1002 12:35:04.392340 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b797cdcc6-7cf2m_9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8/horizon/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: I1002 12:35:04.436799 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-hgh68_72c6c1e1-6428-4023-b56e-ee525bc50c65/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: 
I1002 12:35:04.726808 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b797cdcc6-7cf2m_9ca3c3ea-a0b4-4911-8305-aa10fc46f1c8/horizon-log/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: I1002 12:35:04.733852 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nq8fs_25f013e8-9c08-40f3-84d8-2ddcb5528f44/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:04 crc kubenswrapper[4725]: I1002 12:35:04.876444 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29323441-5xvm4_7aa98170-54f3-4694-95d1-22b25f1512ba/keystone-cron/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.105145 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f5bf68656-dnz2c_7458e87c-8d2c-4e87-9577-c718b49f9e85/keystone-api/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.111953 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e4b7ce88-f603-426c-9af7-b2cccde7469d/kube-state-metrics/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.223826 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5h5xt_c525e8cd-4d87-4d2b-9d78-73199eebbbee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.612481 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-855d67b977-b45rh_714afd76-15e2-4584-a68c-50f3d524f3da/neutron-httpd/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.661614 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-855d67b977-b45rh_714afd76-15e2-4584-a68c-50f3d524f3da/neutron-api/0.log" Oct 02 12:35:05 crc kubenswrapper[4725]: I1002 12:35:05.820396 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m2sgc_c4d85400-0823-4e3c-b7b2-f7902c817c33/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:06 crc kubenswrapper[4725]: I1002 12:35:06.268157 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:35:06 crc kubenswrapper[4725]: E1002 12:35:06.268407 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:35:06 crc kubenswrapper[4725]: I1002 12:35:06.429283 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_91849e10-6e8e-466f-a603-1c15622941c6/nova-api-log/0.log" Oct 02 12:35:06 crc kubenswrapper[4725]: I1002 12:35:06.559612 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bdeadf2b-92a4-44ae-803a-493d9ef4a7c2/nova-cell0-conductor-conductor/0.log" Oct 02 12:35:06 crc kubenswrapper[4725]: I1002 12:35:06.753557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_91849e10-6e8e-466f-a603-1c15622941c6/nova-api-api/0.log" Oct 02 12:35:06 crc kubenswrapper[4725]: I1002 12:35:06.913321 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_431a2433-2959-4ab9-a6ed-2dc9dc8ef55a/nova-cell1-conductor-conductor/0.log" Oct 02 12:35:07 crc kubenswrapper[4725]: I1002 12:35:07.569688 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jxhn6_e1ff53ac-ab56-4b7f-99b1-4ed1c299af72/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:07 crc kubenswrapper[4725]: I1002 12:35:07.642216 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_eb9c8f07-9e51-46ad-87b1-a71668a04d3d/nova-cell1-novncproxy-novncproxy/0.log" Oct 02 12:35:07 crc kubenswrapper[4725]: I1002 12:35:07.935807 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_be5ed584-4418-4447-8ed6-2e89c70e903b/nova-metadata-log/0.log" Oct 02 12:35:08 crc kubenswrapper[4725]: I1002 12:35:08.425063 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_19a3659e-e721-4f41-932d-978e69b77755/nova-scheduler-scheduler/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.099136 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/mysql-bootstrap/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.314069 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/mysql-bootstrap/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.358659 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d9085121-a59b-4dbc-95fa-2a61f0432970/galera/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.431877 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_be5ed584-4418-4447-8ed6-2e89c70e903b/nova-metadata-metadata/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.573568 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/mysql-bootstrap/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.848274 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/galera/0.log" Oct 02 12:35:09 crc kubenswrapper[4725]: I1002 12:35:09.850114 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b64d7b7-832c-4a08-96e5-27fcd2c01988/mysql-bootstrap/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.026541 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-98gqf_ba80438e-e220-487f-b365-27a8224c7ef2/ovn-controller/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.028012 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a694a92f-563d-41d0-908e-744aec98dd01/openstackclient/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.249014 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2mzwk_d765fdd7-c196-4fdc-b5ae-813c10a8bd2b/openstack-network-exporter/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.446718 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server-init/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.682875 4725 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.695016 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovs-vswitchd/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.752943 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b2r45_f4f0e1eb-7dd0-4937-a789-b9edb4de3ade/ovsdb-server-init/0.log" Oct 02 12:35:10 crc kubenswrapper[4725]: I1002 12:35:10.953122 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ctk5t_db895bfa-9a45-45f3-8214-cf8c9e1a1351/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.166559 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041/ovn-northd/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.205214 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6caaf79d-2eb4-4a9a-98f1-4cbbd69a8041/openstack-network-exporter/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.386862 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3c790f7-722b-4693-a9cc-ba649c5833ca/openstack-network-exporter/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.432514 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3c790f7-722b-4693-a9cc-ba649c5833ca/ovsdbserver-nb/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.761908 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f8dd7ed6-4794-4bf8-8d40-8bb837848eed/ovsdbserver-sb/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.775297 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f8dd7ed6-4794-4bf8-8d40-8bb837848eed/openstack-network-exporter/0.log" Oct 02 12:35:11 crc kubenswrapper[4725]: I1002 12:35:11.998083 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b656dd8b-n4tcm_7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a/placement-api/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.099247 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b656dd8b-n4tcm_7ae7c65d-11b4-4dd2-8b47-fd5ff90a1a5a/placement-log/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.276228 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/setup-container/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.477022 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/rabbitmq/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.480241 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bc55a26c-8109-4994-812d-1dd87f46d791/setup-container/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.638821 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/setup-container/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.822527 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/setup-container/0.log" Oct 02 12:35:12 crc kubenswrapper[4725]: I1002 12:35:12.879914 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a1b44c9c-40f8-4c5e-8616-76e24df2ee97/rabbitmq/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.020764 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-472wk_e04f0ae5-20a3-47c1-a877-d717d8d7feb8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.106278 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qt8kt_c238f16b-d636-421b-bbbf-53870c63c217/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.348814 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8l8lq_d070422a-2b6f-42b9-8765-6f630ad4b68f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.492628 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-97mpr_058f1e15-ad8b-4b61-a8e9-f98422ba2151/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.584439 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9kgwd_3377a61c-191d-4a8c-a342-9556746ea6e0/ssh-known-hosts-edpm-deployment/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.835109 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76bb8577f-p858j_92f1433d-ba22-410b-b18f-b048e5ac47a7/proxy-server/0.log" Oct 02 12:35:13 crc kubenswrapper[4725]: I1002 12:35:13.946412 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76bb8577f-p858j_92f1433d-ba22-410b-b18f-b048e5ac47a7/proxy-httpd/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.082358 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zsfz8_66f1562e-003f-4f29-a7ba-2c42b823662e/swift-ring-rebalance/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.187899 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-auditor/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.266529 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-reaper/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.374612 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-replicator/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.442193 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/account-server/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.481303 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-auditor/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.638797 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-replicator/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.655998 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-server/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.729547 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/container-updater/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.864238 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-expirer/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.876173 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-auditor/0.log" Oct 02 12:35:14 crc kubenswrapper[4725]: I1002 12:35:14.967570 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-replicator/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.061514 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-updater/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.076369 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/object-server/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.154100 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/rsync/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.509151 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e1fb73ad-22b0-46f9-a5c0-9faba5acb82d/swift-recon-cron/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.607204 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-csk6j_b3aedfea-069e-4211-878c-b85e0bb9d3ac/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:15 crc kubenswrapper[4725]: I1002 12:35:15.760032 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e4dadf0-6f31-4e9d-8590-5324693180b6/tempest-tests-tempest-tests-runner/0.log" Oct 02 12:35:16 crc kubenswrapper[4725]: I1002 12:35:16.532137 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7934978f-21e8-4a23-adfb-b9e97b479458/test-operator-logs-container/0.log" Oct 02 12:35:16 crc kubenswrapper[4725]: I1002 12:35:16.759060 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6t5ls_b10901fd-a5d7-431f-a105-ff03a7554335/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 12:35:21 crc kubenswrapper[4725]: I1002 12:35:21.274785 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:35:21 crc kubenswrapper[4725]: E1002 12:35:21.275423 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:35:25 crc kubenswrapper[4725]: I1002 12:35:25.170284 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3dd85e23-2a8f-404e-97a5-1ce7cf4a33d4/memcached/0.log" Oct 02 12:35:35 crc kubenswrapper[4725]: I1002 12:35:35.272531 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:35:35 crc kubenswrapper[4725]: E1002 12:35:35.273406 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:35:38 crc kubenswrapper[4725]: I1002 12:35:38.268978 4725 scope.go:117] "RemoveContainer" containerID="64f6d7a637590ac2cc519e15f1000680314e88959ac650e428887572d93521f7" Oct 02 12:35:47 crc kubenswrapper[4725]: I1002 12:35:47.276803 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:35:47 crc kubenswrapper[4725]: E1002 12:35:47.278046 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:35:49 crc kubenswrapper[4725]: I1002 12:35:49.448340 4725 generic.go:334] "Generic (PLEG): container finished" podID="97e22cad-0329-4d10-83e0-7872bc60a935" containerID="5b6c72f6cddec5705b92f6b3ad91fcb9cbd0878722631e87f5056d6f184ed1f7" exitCode=0 Oct 02 12:35:49 crc kubenswrapper[4725]: I1002 12:35:49.448462 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" event={"ID":"97e22cad-0329-4d10-83e0-7872bc60a935","Type":"ContainerDied","Data":"5b6c72f6cddec5705b92f6b3ad91fcb9cbd0878722631e87f5056d6f184ed1f7"} Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.570716 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.603470 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-m5n2x"] Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.611323 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-m5n2x"] Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.638845 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj458\" (UniqueName: \"kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458\") pod \"97e22cad-0329-4d10-83e0-7872bc60a935\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.638888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host\") pod \"97e22cad-0329-4d10-83e0-7872bc60a935\" (UID: \"97e22cad-0329-4d10-83e0-7872bc60a935\") " Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.639148 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host" (OuterVolumeSpecName: "host") pod "97e22cad-0329-4d10-83e0-7872bc60a935" (UID: "97e22cad-0329-4d10-83e0-7872bc60a935"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.639460 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e22cad-0329-4d10-83e0-7872bc60a935-host\") on node \"crc\" DevicePath \"\"" Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.644542 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458" (OuterVolumeSpecName: "kube-api-access-wj458") pod "97e22cad-0329-4d10-83e0-7872bc60a935" (UID: "97e22cad-0329-4d10-83e0-7872bc60a935"). InnerVolumeSpecName "kube-api-access-wj458". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:35:50 crc kubenswrapper[4725]: I1002 12:35:50.741474 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj458\" (UniqueName: \"kubernetes.io/projected/97e22cad-0329-4d10-83e0-7872bc60a935-kube-api-access-wj458\") on node \"crc\" DevicePath \"\"" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.292201 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e22cad-0329-4d10-83e0-7872bc60a935" path="/var/lib/kubelet/pods/97e22cad-0329-4d10-83e0-7872bc60a935/volumes" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.474393 4725 scope.go:117] "RemoveContainer" containerID="5b6c72f6cddec5705b92f6b3ad91fcb9cbd0878722631e87f5056d6f184ed1f7" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.474415 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-m5n2x" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.822469 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-jlqtf"] Oct 02 12:35:51 crc kubenswrapper[4725]: E1002 12:35:51.823151 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e22cad-0329-4d10-83e0-7872bc60a935" containerName="container-00" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.823172 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e22cad-0329-4d10-83e0-7872bc60a935" containerName="container-00" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.823598 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e22cad-0329-4d10-83e0-7872bc60a935" containerName="container-00" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.825520 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.862158 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrsb\" (UniqueName: \"kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.862502 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.963889 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfrsb\" (UniqueName: \"kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.964020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.964133 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:51 crc kubenswrapper[4725]: I1002 12:35:51.983402 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfrsb\" (UniqueName: \"kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb\") pod \"crc-debug-jlqtf\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:52 crc kubenswrapper[4725]: I1002 12:35:52.150830 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:52 crc kubenswrapper[4725]: I1002 12:35:52.492589 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" event={"ID":"bf9b1af5-3075-45ef-8a53-76a4f3a254c7","Type":"ContainerStarted","Data":"c3d6215c416ac0bfeecee8c0e0d7336c6d0e1b0ae9b906bd99c8137cbebff374"} Oct 02 12:35:53 crc kubenswrapper[4725]: E1002 12:35:53.477634 4725 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = container is not running" containerID="1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0" Oct 02 12:35:53 crc kubenswrapper[4725]: E1002 12:35:53.477822 4725 container_log_manager.go:307] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/openshift-must-gather-6wlv8_crc-debug-jlqtf_bf9b1af5-3075-45ef-8a53-76a4f3a254c7/container-00/0.log\": failed to reopen container log \"1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0\": rpc error: code = Unknown desc = container is not running" worker=1 containerID="1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0" path="/var/log/pods/openshift-must-gather-6wlv8_crc-debug-jlqtf_bf9b1af5-3075-45ef-8a53-76a4f3a254c7/container-00/0.log" currentSize=53649591 maxSize=52428800 Oct 02 12:35:53 crc kubenswrapper[4725]: I1002 12:35:53.514821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" event={"ID":"bf9b1af5-3075-45ef-8a53-76a4f3a254c7","Type":"ContainerStarted","Data":"1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0"} Oct 02 12:35:53 crc kubenswrapper[4725]: I1002 12:35:53.543399 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" podStartSLOduration=2.543376335 podStartE2EDuration="2.543376335s" podCreationTimestamp="2025-10-02 12:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 12:35:53.532234456 +0000 UTC m=+4073.439733939" watchObservedRunningTime="2025-10-02 12:35:53.543376335 +0000 UTC m=+4073.450875818" Oct 02 12:35:56 crc kubenswrapper[4725]: I1002 12:35:56.541187 4725 generic.go:334] "Generic (PLEG): container finished" podID="bf9b1af5-3075-45ef-8a53-76a4f3a254c7" containerID="1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0" exitCode=0 Oct 02 12:35:56 crc kubenswrapper[4725]: I1002 12:35:56.541268 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" event={"ID":"bf9b1af5-3075-45ef-8a53-76a4f3a254c7","Type":"ContainerDied","Data":"1a152094bab24b71d61bffd6fcb9124da5ff74d97a2422a5315c4b74c33892f0"} Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.065276 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.177700 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfrsb\" (UniqueName: \"kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb\") pod \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.177753 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host\") pod \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\" (UID: \"bf9b1af5-3075-45ef-8a53-76a4f3a254c7\") " Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.177879 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host" (OuterVolumeSpecName: "host") pod "bf9b1af5-3075-45ef-8a53-76a4f3a254c7" (UID: "bf9b1af5-3075-45ef-8a53-76a4f3a254c7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.178155 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-host\") on node \"crc\" DevicePath \"\"" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.182860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb" (OuterVolumeSpecName: "kube-api-access-cfrsb") pod "bf9b1af5-3075-45ef-8a53-76a4f3a254c7" (UID: "bf9b1af5-3075-45ef-8a53-76a4f3a254c7"). InnerVolumeSpecName "kube-api-access-cfrsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.280614 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfrsb\" (UniqueName: \"kubernetes.io/projected/bf9b1af5-3075-45ef-8a53-76a4f3a254c7-kube-api-access-cfrsb\") on node \"crc\" DevicePath \"\"" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.562011 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" event={"ID":"bf9b1af5-3075-45ef-8a53-76a4f3a254c7","Type":"ContainerDied","Data":"c3d6215c416ac0bfeecee8c0e0d7336c6d0e1b0ae9b906bd99c8137cbebff374"} Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.562085 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d6215c416ac0bfeecee8c0e0d7336c6d0e1b0ae9b906bd99c8137cbebff374" Oct 02 12:35:58 crc kubenswrapper[4725]: I1002 12:35:58.562100 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-jlqtf" Oct 02 12:35:59 crc kubenswrapper[4725]: I1002 12:35:59.268393 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b" Oct 02 12:35:59 crc kubenswrapper[4725]: E1002 12:35:59.268870 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" Oct 02 12:36:00 crc kubenswrapper[4725]: I1002 12:36:00.658442 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-jlqtf"] Oct 02 12:36:00 crc kubenswrapper[4725]: I1002 12:36:00.673824 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-jlqtf"] Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.286397 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9b1af5-3075-45ef-8a53-76a4f3a254c7" path="/var/lib/kubelet/pods/bf9b1af5-3075-45ef-8a53-76a4f3a254c7/volumes" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.892241 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-nhvsx"] Oct 02 12:36:01 crc kubenswrapper[4725]: E1002 12:36:01.893101 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9b1af5-3075-45ef-8a53-76a4f3a254c7" containerName="container-00" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.893132 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9b1af5-3075-45ef-8a53-76a4f3a254c7" containerName="container-00" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.893347 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9b1af5-3075-45ef-8a53-76a4f3a254c7" containerName="container-00" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.893983 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.946982 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:01 crc kubenswrapper[4725]: I1002 12:36:01.947249 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6hb\" (UniqueName: \"kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.049494 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.049578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6hb\" (UniqueName: \"kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.049716 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.080249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6hb\" (UniqueName: \"kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb\") pod \"crc-debug-nhvsx\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.214897 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:02 crc kubenswrapper[4725]: I1002 12:36:02.595750 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" event={"ID":"c571149e-6fe9-4121-89b9-61dc8b0a4fcb","Type":"ContainerStarted","Data":"5341023b3dd7ece9fa5e70657107b0e6b066720818f0a9ae57ea1877584720a9"} Oct 02 12:36:03 crc kubenswrapper[4725]: I1002 12:36:03.608323 4725 generic.go:334] "Generic (PLEG): container finished" podID="c571149e-6fe9-4121-89b9-61dc8b0a4fcb" containerID="82658cd76fd87f5e3cde45fcf7fa0cb8a8f20eea891be6290d51673833b71f46" exitCode=0 Oct 02 12:36:03 crc kubenswrapper[4725]: I1002 12:36:03.608382 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" event={"ID":"c571149e-6fe9-4121-89b9-61dc8b0a4fcb","Type":"ContainerDied","Data":"82658cd76fd87f5e3cde45fcf7fa0cb8a8f20eea891be6290d51673833b71f46"} Oct 02 12:36:03 crc kubenswrapper[4725]: I1002 12:36:03.655224 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-nhvsx"] Oct 02 12:36:03 crc kubenswrapper[4725]: I1002 12:36:03.668458 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6wlv8/crc-debug-nhvsx"] Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.720562 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.802241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v6hb\" (UniqueName: \"kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb\") pod \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.802337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host\") pod \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\" (UID: \"c571149e-6fe9-4121-89b9-61dc8b0a4fcb\") " Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.802481 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host" (OuterVolumeSpecName: "host") pod "c571149e-6fe9-4121-89b9-61dc8b0a4fcb" (UID: "c571149e-6fe9-4121-89b9-61dc8b0a4fcb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.802871 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-host\") on node \"crc\" DevicePath \"\"" Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.807400 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb" (OuterVolumeSpecName: "kube-api-access-9v6hb") pod "c571149e-6fe9-4121-89b9-61dc8b0a4fcb" (UID: "c571149e-6fe9-4121-89b9-61dc8b0a4fcb"). InnerVolumeSpecName "kube-api-access-9v6hb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 12:36:04 crc kubenswrapper[4725]: I1002 12:36:04.904757 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v6hb\" (UniqueName: \"kubernetes.io/projected/c571149e-6fe9-4121-89b9-61dc8b0a4fcb-kube-api-access-9v6hb\") on node \"crc\" DevicePath \"\"" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.250690 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.278505 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c571149e-6fe9-4121-89b9-61dc8b0a4fcb" path="/var/lib/kubelet/pods/c571149e-6fe9-4121-89b9-61dc8b0a4fcb/volumes" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.429947 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.445622 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.624928 4725 scope.go:117] "RemoveContainer" containerID="82658cd76fd87f5e3cde45fcf7fa0cb8a8f20eea891be6290d51673833b71f46" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.624969 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/crc-debug-nhvsx" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.658033 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/extract/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.707173 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.710884 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/util/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.726540 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6bf20e6e0dacc007533f714c99cd1e1d919caab53535214f8e41ffebb6x8h2z_ae4a4227-820c-48fd-a32d-7e62caaa222b/pull/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.881178 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-n5pzn_a51da7c1-9136-40c8-851a-f7c2d1f7a644/kube-rbac-proxy/0.log" Oct 02 12:36:05 crc kubenswrapper[4725]: I1002 12:36:05.950618 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-n5pzn_a51da7c1-9136-40c8-851a-f7c2d1f7a644/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.047627 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6hxv6_2a1bf314-ad40-4055-8373-b05888c06791/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.078443 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6hxv6_2a1bf314-ad40-4055-8373-b05888c06791/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.128490 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rnbs8_7306cbd5-07f3-48a7-a865-752417bf2e8e/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.236570 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rnbs8_7306cbd5-07f3-48a7-a865-752417bf2e8e/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.348513 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-l56v9_827de292-bc8c-40da-be5f-443d06e48782/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.421858 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-l56v9_827de292-bc8c-40da-be5f-443d06e48782/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.542068 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-cv4rz_e66fa8da-eabe-4fe6-8689-961c09641552/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.548270 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-cv4rz_e66fa8da-eabe-4fe6-8689-961c09641552/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.625058 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-sr7pb_dfe403d1-c0bb-4570-8b27-714c65d930af/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.724195 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-sr7pb_dfe403d1-c0bb-4570-8b27-714c65d930af/manager/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.790768 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-nzp5n_2d4f9b95-e805-4def-bd1c-35b262ebd01f/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.925413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-hfz8s_57843ab0-f141-436e-847c-71f339bb736b/kube-rbac-proxy/0.log" Oct 02 12:36:06 crc kubenswrapper[4725]: I1002 12:36:06.943749 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-nzp5n_2d4f9b95-e805-4def-bd1c-35b262ebd01f/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.001595 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-hfz8s_57843ab0-f141-436e-847c-71f339bb736b/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.120523 4725 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-bbfd9_9d557980-a1fc-4123-9a45-351264ad1fbc/kube-rbac-proxy/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.174501 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-bbfd9_9d557980-a1fc-4123-9a45-351264ad1fbc/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.294743 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-zcsfx_023a7a0e-9279-4b9b-ba5d-6cd41b2aa729/kube-rbac-proxy/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.337182 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-zcsfx_023a7a0e-9279-4b9b-ba5d-6cd41b2aa729/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.409210 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-d7btl_dd3980d8-2ea7-4dd5-9604-9e09025e4220/kube-rbac-proxy/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.502608 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-d7btl_dd3980d8-2ea7-4dd5-9604-9e09025e4220/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.561503 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-gx29d_81a57946-838b-45e0-8a00-a7b50950db67/kube-rbac-proxy/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.656494 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-gx29d_81a57946-838b-45e0-8a00-a7b50950db67/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.731286 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-nlk7p_f4918ab0-3268-4081-bdf8-05df0b51e62b/kube-rbac-proxy/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.824151 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-nlk7p_f4918ab0-3268-4081-bdf8-05df0b51e62b/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.949798 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-5hpzt_44910e65-f73b-4454-bd9d-8fbbfb18445c/manager/0.log" Oct 02 12:36:07 crc kubenswrapper[4725]: I1002 12:36:07.980506 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-5hpzt_44910e65-f73b-4454-bd9d-8fbbfb18445c/kube-rbac-proxy/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 12:36:08.117244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5869cb545-jwwlx_3732c646-2b59-4238-8466-4c9240bc5b9a/kube-rbac-proxy/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 12:36:08.176900 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5869cb545-jwwlx_3732c646-2b59-4238-8466-4c9240bc5b9a/manager/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 
12:36:08.284125 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8479857cf7-b2ttm_caab214a-7c5d-4d45-bebe-680090c291d8/kube-rbac-proxy/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 12:36:08.434622 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-859f658b7-xk7wb_34d68cf3-a46e-4588-abff-0487fe2ceacc/kube-rbac-proxy/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 12:36:08.759369 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-859f658b7-xk7wb_34d68cf3-a46e-4588-abff-0487fe2ceacc/operator/0.log" Oct 02 12:36:08 crc kubenswrapper[4725]: I1002 12:36:08.847991 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xxgld_b1d09d2a-fb84-40db-91ed-72875d001d9a/registry-server/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.445271 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-6r9zk_165193eb-72d2-44c8-ad3c-12679db734a1/kube-rbac-proxy/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.484071 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-6r9zk_165193eb-72d2-44c8-ad3c-12679db734a1/manager/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.604206 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-pvtws_fb419c8a-047c-4df7-8120-25624030a3fe/kube-rbac-proxy/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.618258 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8479857cf7-b2ttm_caab214a-7c5d-4d45-bebe-680090c291d8/manager/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.664075 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-pvtws_fb419c8a-047c-4df7-8120-25624030a3fe/manager/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.704311 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-jhggf_53be820c-d953-4996-96da-4cec8d6b3bf0/operator/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.835017 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-25kj8_c4d00c80-69fb-4507-9e14-2a54cdb0b8c5/kube-rbac-proxy/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.896951 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-25kj8_c4d00c80-69fb-4507-9e14-2a54cdb0b8c5/manager/0.log" Oct 02 12:36:09 crc kubenswrapper[4725]: I1002 12:36:09.911906 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-fjs4g_6c738c27-b7d2-4e56-b0e5-61c19a279278/kube-rbac-proxy/0.log" Oct 02 12:36:10 crc kubenswrapper[4725]: I1002 12:36:10.080258 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-fjs4g_6c738c27-b7d2-4e56-b0e5-61c19a279278/manager/0.log" Oct 02 12:36:10 crc kubenswrapper[4725]: 
I1002 12:36:10.098702 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-2mchd_d3b254cf-3771-426e-9211-9cd279379d73/kube-rbac-proxy/0.log"
Oct 02 12:36:10 crc kubenswrapper[4725]: I1002 12:36:10.103033 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-2mchd_d3b254cf-3771-426e-9211-9cd279379d73/manager/0.log"
Oct 02 12:36:10 crc kubenswrapper[4725]: I1002 12:36:10.209580 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-fp4qx_67d56c77-e0a6-4841-9c57-2afc39fcf9db/kube-rbac-proxy/0.log"
Oct 02 12:36:10 crc kubenswrapper[4725]: I1002 12:36:10.892419 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-fp4qx_67d56c77-e0a6-4841-9c57-2afc39fcf9db/manager/0.log"
Oct 02 12:36:13 crc kubenswrapper[4725]: I1002 12:36:13.268658 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:36:13 crc kubenswrapper[4725]: E1002 12:36:13.269199 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869"
Oct 02 12:36:28 crc kubenswrapper[4725]: I1002 12:36:28.268580 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:36:28 crc kubenswrapper[4725]: E1002 12:36:28.269426 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869"
Oct 02 12:36:28 crc kubenswrapper[4725]: I1002 12:36:28.691805 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2hrs6_825215a6-1ebc-426c-b54d-f54f6c261f55/control-plane-machine-set-operator/0.log"
Oct 02 12:36:28 crc kubenswrapper[4725]: I1002 12:36:28.874358 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ps6mz_c36f3900-450a-437f-9fda-b3c7ccf6b4be/kube-rbac-proxy/0.log"
Oct 02 12:36:28 crc kubenswrapper[4725]: I1002 12:36:28.941106 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ps6mz_c36f3900-450a-437f-9fda-b3c7ccf6b4be/machine-api-operator/0.log"
Oct 02 12:36:38 crc kubenswrapper[4725]: I1002 12:36:38.358357 4725 scope.go:117] "RemoveContainer" containerID="a52550984292e1e09e43056d7481f8c32ffeb8ae428e0b7bc67dd3c5e3465791"
Oct 02 12:36:38 crc kubenswrapper[4725]: I1002 12:36:38.379679 4725 scope.go:117] "RemoveContainer" containerID="5da4f8749590863265217ec12f81d9cfe21f620e449f2a447fd401e5a24cde03"
Oct 02 12:36:38 crc kubenswrapper[4725]: I1002 12:36:38.429577 4725 scope.go:117] "RemoveContainer" containerID="089413101e342c6e444336e5993c6481b158f74eae26ff6e50202ab6e5dab435"
Oct 02 12:36:40 crc kubenswrapper[4725]: I1002 12:36:40.268236 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:36:40 crc kubenswrapper[4725]: E1002 12:36:40.268836 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869"
Oct 02 12:36:42 crc kubenswrapper[4725]: I1002 12:36:42.675178 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-smqhv_c29a13f6-36a6-49c4-b16d-df2fdcda469b/cert-manager-cainjector/0.log"
Oct 02 12:36:42 crc kubenswrapper[4725]: I1002 12:36:42.700322 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-pjgh6_df4f98fb-c0cd-4e39-b9fd-ec6500f6644e/cert-manager-controller/0.log"
Oct 02 12:36:42 crc kubenswrapper[4725]: I1002 12:36:42.855528 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-wmnqx_c3b045dd-0aa9-4c6b-8930-16c6ae444847/cert-manager-webhook/0.log"
Oct 02 12:36:51 crc kubenswrapper[4725]: I1002 12:36:51.268178 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:36:51 crc kubenswrapper[4725]: E1002 12:36:51.269057 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.101238 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-bm7x4_76aebb91-1906-47b5-8efc-f4e8290b9ffb/nmstate-console-plugin/0.log"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.274678 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mg76p_7d2d0930-f603-4f33-9da1-f7d372d70912/nmstate-handler/0.log"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.367321 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-xgxbz_36df4296-bcd2-4c2a-b6f7-eef03e21d934/nmstate-metrics/0.log"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.368519 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-xgxbz_36df4296-bcd2-4c2a-b6f7-eef03e21d934/kube-rbac-proxy/0.log"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.496821 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-mdqp5_b4436bab-3a23-4c24-bb5b-fdd06e5c2b78/nmstate-operator/0.log"
Oct 02 12:36:55 crc kubenswrapper[4725]: I1002 12:36:55.596078 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-c8s4z_7ff10093-4a58-4838-88a7-cb77f8ae577a/nmstate-webhook/0.log"
Oct 02 12:37:04 crc kubenswrapper[4725]: I1002 12:37:04.268468 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:37:04 crc kubenswrapper[4725]: E1002 12:37:04.269305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lv8cx_openshift-machine-config-operator(1e9bad7c-78f8-435d-8449-7c5b04a16869)\"" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869"
Oct 02 12:37:09 crc kubenswrapper[4725]: I1002 12:37:09.681325 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-lsjhw_b130a44e-650b-4940-a3fa-392c5f797d6f/kube-rbac-proxy/0.log"
Oct 02 12:37:09 crc kubenswrapper[4725]: I1002 12:37:09.823135 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-lsjhw_b130a44e-650b-4940-a3fa-392c5f797d6f/controller/0.log"
Oct 02 12:37:09 crc kubenswrapper[4725]: I1002 12:37:09.889331 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.035508 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.047023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.082983 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.122256 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.282649 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.298845 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.300613 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.341063 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.525075 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-metrics/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.531932 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-reloader/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.540308 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/controller/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.541976 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/cp-frr-files/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.696288 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/kube-rbac-proxy/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.717518 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/frr-metrics/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.730002 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/kube-rbac-proxy-frr/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.862940 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/reloader/0.log"
Oct 02 12:37:10 crc kubenswrapper[4725]: I1002 12:37:10.973500 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-m8jtf_88871cd6-3d0d-4bfb-bf21-f4a6c35d9ac1/frr-k8s-webhook-server/0.log"
Oct 02 12:37:11 crc kubenswrapper[4725]: I1002 12:37:11.136687 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8cc8c8574-jxrnl_ceb5035d-4044-42f3-be35-b3f861ba059c/manager/0.log"
Oct 02 12:37:11 crc kubenswrapper[4725]: I1002 12:37:11.358559 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-757dfd7686-7m9m2_6cada9d9-3b84-4015-b1c9-7bf3de0debbb/webhook-server/0.log"
Oct 02 12:37:11 crc kubenswrapper[4725]: I1002 12:37:11.503858 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2ln9_58368f71-69e1-4da4-9a08-0c7c5b093c4d/kube-rbac-proxy/0.log"
Oct 02 12:37:11 crc kubenswrapper[4725]: I1002 12:37:11.960437 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2ln9_58368f71-69e1-4da4-9a08-0c7c5b093c4d/speaker/0.log"
Oct 02 12:37:12 crc kubenswrapper[4725]: I1002 12:37:12.166743 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm5qd_177207e1-c514-4ece-ab43-249bf5253dd6/frr/0.log"
Oct 02 12:37:17 crc kubenswrapper[4725]: I1002 12:37:17.269080 4725 scope.go:117] "RemoveContainer" containerID="f6a5548542b916c7be9bb474979844ba54706a84584b5c6fc3fc74a8ec21048b"
Oct 02 12:37:18 crc kubenswrapper[4725]: I1002 12:37:18.327331 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" event={"ID":"1e9bad7c-78f8-435d-8449-7c5b04a16869","Type":"ContainerStarted","Data":"8e7583f769a40798ba86682e2ab1263a400bb031a9475d3229f807b442cd39fe"}
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.599927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.755051 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.756262 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.759950 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.923269 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/util/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.924983 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/extract/0.log"
Oct 02 12:37:25 crc kubenswrapper[4725]: I1002 12:37:25.970484 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lc598_3ca3b9f7-62cf-4bd9-807d-e9ab02d98327/pull/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.063194 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.264903 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.272412 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.291601 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.439464 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-content/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.440767 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/extract-utilities/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.565360 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sqkzj_1b967738-3f35-4166-98e2-bc3face118fd/registry-server/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.666820 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.813059 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.844513 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.858079 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log"
Oct 02 12:37:26 crc kubenswrapper[4725]: I1002 12:37:26.992524 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-content/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.097580 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/extract-utilities/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.230300 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.397819 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.450815 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.498360 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.747985 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/pull/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.752333 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pn4rg_abe00bd7-b29e-4266-8a7d-64a79f500125/registry-server/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.772834 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/util/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.779307 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6bhvz_f1f9f2b2-614f-454e-8f94-af2108154130/extract/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.945581 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5zrbg_6b1730b0-4eb3-4a40-86a6-2908a9c9acb2/marketplace-operator/0.log"
Oct 02 12:37:27 crc kubenswrapper[4725]: I1002 12:37:27.965156 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.181160 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.197275 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.217466 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.387906 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.428353 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/extract-content/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.496428 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lz4c7_c456f806-c830-4eac-bb7b-5c5666bfbd77/registry-server/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.569942 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.765517 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.765623 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.807141 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.919336 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-utilities/0.log"
Oct 02 12:37:28 crc kubenswrapper[4725]: I1002 12:37:28.959115 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/extract-content/0.log"
Oct 02 12:37:29 crc kubenswrapper[4725]: I1002 12:37:29.560577 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2tkn8_81dcdac7-80b5-4b66-9620-1b4a9619b47b/registry-server/0.log"
Oct 02 12:39:23 crc kubenswrapper[4725]: I1002 12:39:23.671544 4725 generic.go:334] "Generic (PLEG): container finished" podID="980815d8-d8e4-489d-a7fc-5cd7d48f5df7" containerID="529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18" exitCode=0
Oct 02 12:39:23 crc kubenswrapper[4725]: I1002 12:39:23.671599 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" event={"ID":"980815d8-d8e4-489d-a7fc-5cd7d48f5df7","Type":"ContainerDied","Data":"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"}
Oct 02 12:39:23 crc kubenswrapper[4725]: I1002 12:39:23.672921 4725 scope.go:117] "RemoveContainer" containerID="529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"
Oct 02 12:39:24 crc kubenswrapper[4725]: I1002 12:39:24.501710 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6wlv8_must-gather-m8fzk_980815d8-d8e4-489d-a7fc-5cd7d48f5df7/gather/0.log"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.250019 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6wlv8/must-gather-m8fzk"]
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.250751 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6wlv8/must-gather-m8fzk" podUID="980815d8-d8e4-489d-a7fc-5cd7d48f5df7" containerName="copy" containerID="cri-o://385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e" gracePeriod=2
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.257308 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6wlv8/must-gather-m8fzk"]
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.721037 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6wlv8_must-gather-m8fzk_980815d8-d8e4-489d-a7fc-5cd7d48f5df7/copy/0.log"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.721588 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/must-gather-m8fzk"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.793287 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6wlv8_must-gather-m8fzk_980815d8-d8e4-489d-a7fc-5cd7d48f5df7/copy/0.log"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.794074 4725 generic.go:334] "Generic (PLEG): container finished" podID="980815d8-d8e4-489d-a7fc-5cd7d48f5df7" containerID="385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e" exitCode=143
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.794137 4725 scope.go:117] "RemoveContainer" containerID="385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.794165 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6wlv8/must-gather-m8fzk"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.817676 4725 scope.go:117] "RemoveContainer" containerID="529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.888404 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zvp5\" (UniqueName: \"kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5\") pod \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") "
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.888549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output\") pod \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\" (UID: \"980815d8-d8e4-489d-a7fc-5cd7d48f5df7\") "
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.893663 4725 scope.go:117] "RemoveContainer" containerID="385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e"
Oct 02 12:39:35 crc kubenswrapper[4725]: E1002 12:39:35.894695 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e\": container with ID starting with 385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e not found: ID does not exist" containerID="385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.894756 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e"} err="failed to get container status \"385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e\": rpc error: code = NotFound desc = could not find container \"385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e\": container with ID starting with 385d864f4d1f7bcfcdf9de24cc972e8889d6aba635300ad4b3a8d4a35a7a860e not found: ID does not exist"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.894790 4725 scope.go:117] "RemoveContainer" containerID="529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"
Oct 02 12:39:35 crc kubenswrapper[4725]: E1002 12:39:35.895030 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18\": container with ID starting with 529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18 not found: ID does not exist" containerID="529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.895052 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18"} err="failed to get container status \"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18\": rpc error: code = NotFound desc = could not find container \"529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18\": container with ID starting with 529a809e271eadb7cb621f7712eb136b811f53f4d53c1a9a2b09499c55e1da18 not found: ID does not exist"
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.895716 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5" (OuterVolumeSpecName: "kube-api-access-9zvp5") pod "980815d8-d8e4-489d-a7fc-5cd7d48f5df7" (UID: "980815d8-d8e4-489d-a7fc-5cd7d48f5df7"). InnerVolumeSpecName "kube-api-access-9zvp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 12:39:35 crc kubenswrapper[4725]: I1002 12:39:35.990558 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zvp5\" (UniqueName: \"kubernetes.io/projected/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-kube-api-access-9zvp5\") on node \"crc\" DevicePath \"\""
Oct 02 12:39:36 crc kubenswrapper[4725]: I1002 12:39:36.076119 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "980815d8-d8e4-489d-a7fc-5cd7d48f5df7" (UID: "980815d8-d8e4-489d-a7fc-5cd7d48f5df7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 12:39:36 crc kubenswrapper[4725]: I1002 12:39:36.092602 4725 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/980815d8-d8e4-489d-a7fc-5cd7d48f5df7-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 02 12:39:37 crc kubenswrapper[4725]: I1002 12:39:37.285622 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980815d8-d8e4-489d-a7fc-5cd7d48f5df7" path="/var/lib/kubelet/pods/980815d8-d8e4-489d-a7fc-5cd7d48f5df7/volumes"
Oct 02 12:39:44 crc kubenswrapper[4725]: I1002 12:39:44.978864 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:39:44 crc kubenswrapper[4725]: I1002 12:39:44.979486 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 12:40:14 crc kubenswrapper[4725]: I1002 12:40:14.978585 4725 patch_prober.go:28] interesting pod/machine-config-daemon-lv8cx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 12:40:14 crc kubenswrapper[4725]: I1002 12:40:14.979136 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lv8cx" podUID="1e9bad7c-78f8-435d-8449-7c5b04a16869" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"